+---------+-----------------------------------------------------------------------------------+---------------------------------------------------+---------------------+---------------+---------+-------------------+------------------+----------------+-----------------------+----------------------------------+
| Index   | Op Name                                                                           | Mod Name                                          | Attr                | Dtype         | Scale   | Min               | Max              | Mean           | Var                   | Shape                            |
|---------+-----------------------------------------------------------------------------------+---------------------------------------------------+---------------------+---------------+---------+-------------------+------------------+----------------+-----------------------+----------------------------------|
| 0       | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | backbone.quant                                    | input               | torch.float32 |         | -0.8671875        | 0.8359375        | -0.1171943     | 0.0536020             | torch.Size([12, 3, 256, 704])    |
| 0       | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | backbone.quant                                    | output              | torch.float32 |         | -0.8671875        | 0.8359375        | -0.1171943     | 0.0536020             | torch.Size([12, 3, 256, 704])    |
| 1       | torch.nn.modules.conv.Conv2d                                                      | backbone.patch_embed.0.0                          | input               | torch.float32 |         | -0.8671875        | 0.8359375        | -0.1171943     | 0.0536020             | torch.Size([12, 3, 256, 704])    |
| 1       | torch.nn.modules.conv.Conv2d                                                      | backbone.patch_embed.0.0                          | weight              | torch.float32 |         | -0.4754249        | 0.7710248        | -0.0017089     | 0.0210140             | torch.Size([32, 3, 3, 3])        |
| 1       | torch.nn.modules.conv.Conv2d                                                      | backbone.patch_embed.0.0                          | bias                | torch.float32 |         | -0.2555025        | 0.2229914        | 0.0079637      | 0.0182658             | torch.Size([32])                 |
| 1       | torch.nn.modules.conv.Conv2d                                                      | backbone.patch_embed.0.0                          | output              | torch.float32 |         | -1.7594987        | 2.1449559        | 0.0153627      | 0.0311638             | torch.Size([12, 32, 128, 352])   |
| 2       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.0.1                          | input               | torch.float32 |         | -1.7594987        | 2.1449559        | 0.0153627      | 0.0311638             | torch.Size([12, 32, 128, 352])   |
| 2       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.0.1                          | weight              | torch.float32 |         | 0.6312608         | 1.2746012        | 0.9064816      | 0.0334574             | torch.Size([32])                 |
| 2       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.0.1                          | bias                | torch.float32 |         | -0.4685161        | 0.3284433        | 0.0063459      | 0.0345012             | torch.Size([32])                 |
| 2       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.0.1                          | running_mean        | torch.float32 |         | -0.3303403        | 0.2789666        | 0.0179354      | 0.0229389             | torch.Size([32])                 |
| 2       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.0.1                          | running_var         | torch.float32 |         | 0.0009879         | 0.1249751        | 0.0169960      | 0.0008847             | torch.Size([32])                 |
| 2       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.0.1                          | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 2       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.0.1                          | output              | torch.float32 |         | -11.4863758       | 10.2876215       | -0.0113519     | 0.4540018             | torch.Size([12, 32, 128, 352])   |
| 3       | torch.nn.modules.activation.ReLU                                                  | backbone.patch_embed.0.2                          | input               | torch.float32 |         | -11.4863758       | 10.2876215       | -0.0113519     | 0.4540018             | torch.Size([12, 32, 128, 352])   |
| 3       | torch.nn.modules.activation.ReLU                                                  | backbone.patch_embed.0.2                          | output              | torch.float32 |         | 0.0000000         | 10.2876215       | 0.2095613      | 0.1565833             | torch.Size([12, 32, 128, 352])   |
| 4       | torch.nn.modules.conv.Conv2d                                                      | backbone.patch_embed.1.0                          | input               | torch.float32 |         | 0.0000000         | 10.2876215       | 0.2095613      | 0.1565833             | torch.Size([12, 32, 128, 352])   |
| 4       | torch.nn.modules.conv.Conv2d                                                      | backbone.patch_embed.1.0                          | weight              | torch.float32 |         | -0.9394800        | 0.4024877        | -0.0098357     | 0.0058739             | torch.Size([64, 32, 3, 3])       |
| 4       | torch.nn.modules.conv.Conv2d                                                      | backbone.patch_embed.1.0                          | bias                | torch.float32 |         | -0.1499553        | 0.1476765        | -0.0006753     | 0.0039396             | torch.Size([64])                 |
| 4       | torch.nn.modules.conv.Conv2d                                                      | backbone.patch_embed.1.0                          | output              | torch.float32 |         | -45.1234779       | 10.6227198       | -0.6148833     | 2.0302613             | torch.Size([12, 64, 64, 176])    |
| 5       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.1.1                          | input               | torch.float32 |         | -45.1234779       | 10.6227198       | -0.6148833     | 2.0302613             | torch.Size([12, 64, 64, 176])    |
| 5       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.1.1                          | weight              | torch.float32 |         | 0.7589851         | 1.2483623        | 0.9892360      | 0.0119031             | torch.Size([64])                 |
| 5       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.1.1                          | bias                | torch.float32 |         | -0.5897177        | 0.4495856        | 0.0158671      | 0.0444188             | torch.Size([64])                 |
| 5       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.1.1                          | running_mean        | torch.float32 |         | -1.7624836        | 2.6844971        | -0.5797317     | 0.8397173             | torch.Size([64])                 |
| 5       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.1.1                          | running_var         | torch.float32 |         | 0.9100102         | 10.0830956       | 3.2170339      | 3.2661383             | torch.Size([64])                 |
| 5       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.1.1                          | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 5       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.patch_embed.1.1                          | output              | torch.float32 |         | -17.4165134       | 6.5428586        | 0.0050764      | 0.5177718             | torch.Size([12, 64, 64, 176])    |
| 6       | torch.nn.modules.activation.ReLU                                                  | backbone.patch_embed.1.2                          | input               | torch.float32 |         | -17.4165134       | 6.5428586        | 0.0050764      | 0.5177718             | torch.Size([12, 64, 64, 176])    |
| 6       | torch.nn.modules.activation.ReLU                                                  | backbone.patch_embed.1.2                          | output              | torch.float32 |         | 0.0000000         | 6.5428586        | 0.2560138      | 0.1094660             | torch.Size([12, 64, 64, 176])    |
| 7       | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.dwconv.0                | input               | torch.float32 |         | 0.0000000         | 6.5428586        | 0.2560138      | 0.1094660             | torch.Size([12, 64, 64, 176])    |
| 7       | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.dwconv.0                | weight              | torch.float32 |         | -0.7628985        | 0.8584614        | -0.0037479     | 0.0448589             | torch.Size([64, 1, 3, 3])        |
| 7       | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.dwconv.0                | bias                | torch.float32 |         | -0.4645726        | 0.3484828        | -0.0263360     | 0.0411845             | torch.Size([64])                 |
| 7       | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.dwconv.0                | output              | torch.float32 |         | -5.4513474        | 3.7942870        | -0.0348324     | 0.0874424             | torch.Size([12, 64, 64, 176])    |
| 8       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.0.dwconv.1                | input               | torch.float32 |         | -5.4513474        | 3.7942870        | -0.0348324     | 0.0874424             | torch.Size([12, 64, 64, 176])    |
| 8       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.0.dwconv.1                | weight              | torch.float32 |         | 0.7171828         | 1.1681942        | 0.9451184      | 0.0080040             | torch.Size([64])                 |
| 8       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.0.dwconv.1                | bias                | torch.float32 |         | -0.1869041        | 0.2168626        | 0.0061239      | 0.0084568             | torch.Size([64])                 |
| 8       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.0.dwconv.1                | running_mean        | torch.float32 |         | -0.7515609        | 0.6238656        | -0.0458630     | 0.0744557             | torch.Size([64])                 |
| 8       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.0.dwconv.1                | running_var         | torch.float32 |         | 0.0019832         | 0.5234666        | 0.0599129      | 0.0096549             | torch.Size([64])                 |
| 8       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.0.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 8       | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.0.dwconv.1                | output              | torch.float32 |         | -11.2262669       | 11.6347666       | 0.0325299      | 0.6660396             | torch.Size([12, 64, 64, 176])    |
| 9       | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.pwconv1                 | input               | torch.float32 |         | -11.2262669       | 11.6347666       | 0.0325299      | 0.6660396             | torch.Size([12, 64, 64, 176])    |
| 9       | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.pwconv1                 | weight              | torch.float32 |         | -0.3771044        | 0.4624698        | -0.0007910     | 0.0080298             | torch.Size([128, 64, 1, 1])      |
| 9       | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.pwconv1                 | bias                | torch.float32 |         | -0.3051038        | 0.1933560        | -0.0611908     | 0.0079311             | torch.Size([128])                |
| 9       | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.pwconv1                 | output              | torch.float32 |         | -5.8648033        | 5.9004812        | -0.0723137     | 0.4699377             | torch.Size([12, 128, 64, 176])   |
| 10      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.0.block.0.act                     | input               | torch.float32 |         | -5.8648033        | 5.9004812        | -0.0723137     | 0.4699377             | torch.Size([12, 128, 64, 176])   |
| 10      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.0.block.0.act                     | output              | torch.float32 |         | -0.1699712        | 5.9004812        | 0.1135490      | 0.1224564             | torch.Size([12, 128, 64, 176])   |
| 11      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 5.9004812        | 0.1135490      | 0.1224564             | torch.Size([12, 128, 64, 176])   |
| 11      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.pwconv2                 | weight              | torch.float32 |         | -0.2399586        | 0.2694110        | 0.0037713      | 0.0049621             | torch.Size([64, 128, 1, 1])      |
| 11      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.pwconv2                 | bias                | torch.float32 |         | -0.7430673        | 0.4469467        | 0.0274964      | 0.0508755             | torch.Size([64])                 |
| 11      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.0.pwconv2                 | output              | torch.float32 |         | -3.1481645        | 4.2434359        | 0.0497029      | 0.2618269             | torch.Size([12, 64, 64, 176])    |
| 12      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.0.layer_scale             | input               | torch.float32 |         | -3.1481645        | 4.2434359        | 0.0497029      | 0.2618269             | torch.Size([12, 64, 64, 176])    |
| 12      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.0.layer_scale             | weight              | torch.float32 |         | 0.5554022         | 1.0303934        | 0.8849150      | 0.0080014             | torch.Size([64])                 |
| 12      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.0.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 12      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.0.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 12      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.0.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 12      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.0.layer_scale             | output              | torch.float32 |         | -3.2213342        | 4.3209591        | 0.0482979      | 0.2154846             | torch.Size([12, 64, 64, 176])    |
| 13      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.0.add                     | input_0             | torch.float32 |         | 0.0000000         | 6.5428586        | 0.2560138      | 0.1094660             | torch.Size([12, 64, 64, 176])    |
| 13      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.0.add                     | input_1             | torch.float32 |         | -3.2213342        | 4.3209591        | 0.0482979      | 0.2154846             | torch.Size([12, 64, 64, 176])    |
| 13      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.0.add                     | output              | torch.float32 |         | -3.2213342        | 7.9977160        | 0.3043116      | 0.3855745             | torch.Size([12, 64, 64, 176])    |
| 14      | torch.nn.modules.linear.Identity                                                  | backbone.stages.0.block.0.extra_act               | input               | torch.float32 |         | -3.2213342        | 7.9977160        | 0.3043116      | 0.3855745             | torch.Size([12, 64, 64, 176])    |
| 14      | torch.nn.modules.linear.Identity                                                  | backbone.stages.0.block.0.extra_act               | output              | torch.float32 |         | -3.2213342        | 7.9977160        | 0.3043116      | 0.3855745             | torch.Size([12, 64, 64, 176])    |
| 15      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.dwconv.0                | input               | torch.float32 |         | -3.2213342        | 7.9977160        | 0.3043116      | 0.3855745             | torch.Size([12, 64, 64, 176])    |
| 15      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.dwconv.0                | weight              | torch.float32 |         | -0.7311579        | 0.4880761        | -0.0041821     | 0.0403801             | torch.Size([64, 1, 3, 3])        |
| 15      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.dwconv.0                | bias                | torch.float32 |         | -0.4033075        | 0.3360669        | -0.0358682     | 0.0398612             | torch.Size([64])                 |
| 15      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.dwconv.0                | output              | torch.float32 |         | -5.9545064        | 6.3149514        | -0.0827875     | 0.2667516             | torch.Size([12, 64, 64, 176])    |
| 16      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.1.dwconv.1                | input               | torch.float32 |         | -5.9545064        | 6.3149514        | -0.0827875     | 0.2667516             | torch.Size([12, 64, 64, 176])    |
| 16      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.1.dwconv.1                | weight              | torch.float32 |         | 0.7833908         | 1.2510334        | 1.0048257      | 0.0078857             | torch.Size([64])                 |
| 16      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.1.dwconv.1                | bias                | torch.float32 |         | -0.2147512        | 0.2376671        | 0.0183725      | 0.0097866             | torch.Size([64])                 |
| 16      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.1.dwconv.1                | running_mean        | torch.float32 |         | -1.6460904        | 1.0457867        | -0.0779932     | 0.2079995             | torch.Size([64])                 |
| 16      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.1.dwconv.1                | running_var         | torch.float32 |         | 0.0076346         | 0.5963194        | 0.1231261      | 0.0165156             | torch.Size([64])                 |
| 16      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.1.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 16      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.1.dwconv.1                | output              | torch.float32 |         | -12.2285471       | 12.0275965       | -0.0072141     | 0.8256057             | torch.Size([12, 64, 64, 176])    |
| 17      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.pwconv1                 | input               | torch.float32 |         | -12.2285471       | 12.0275965       | -0.0072141     | 0.8256057             | torch.Size([12, 64, 64, 176])    |
| 17      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.pwconv1                 | weight              | torch.float32 |         | -0.6521709        | 0.5402265        | -0.0032623     | 0.0090059             | torch.Size([128, 64, 1, 1])      |
| 17      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.pwconv1                 | bias                | torch.float32 |         | -0.2690851        | 0.2633894        | -0.0974847     | 0.0095327             | torch.Size([128])                |
| 17      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.pwconv1                 | output              | torch.float32 |         | -9.3086166        | 8.6913538        | -0.1145967     | 0.6072676             | torch.Size([12, 128, 64, 176])   |
| 18      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.0.block.1.act                     | input               | torch.float32 |         | -9.3086166        | 8.6913538        | -0.1145967     | 0.6072676             | torch.Size([12, 128, 64, 176])   |
| 18      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.0.block.1.act                     | output              | torch.float32 |         | -0.1699712        | 8.6913538        | 0.1256504      | 0.1509490             | torch.Size([12, 128, 64, 176])   |
| 19      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 8.6913538        | 0.1256504      | 0.1509490             | torch.Size([12, 128, 64, 176])   |
| 19      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.pwconv2                 | weight              | torch.float32 |         | -0.2852968        | 0.2860411        | 0.0016800      | 0.0056762             | torch.Size([64, 128, 1, 1])      |
| 19      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.pwconv2                 | bias                | torch.float32 |         | -0.3551933        | 0.5310498        | 0.0408388      | 0.0311848             | torch.Size([64])                 |
| 19      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.1.pwconv2                 | output              | torch.float32 |         | -3.7739112        | 4.1013713        | 0.0675470      | 0.2223622             | torch.Size([12, 64, 64, 176])    |
| 20      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.1.layer_scale             | input               | torch.float32 |         | -3.7739112        | 4.1013713        | 0.0675470      | 0.2223622             | torch.Size([12, 64, 64, 176])    |
| 20      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.1.layer_scale             | weight              | torch.float32 |         | 0.8179380         | 1.1042907        | 0.9566889      | 0.0036308             | torch.Size([64])                 |
| 20      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.1.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 20      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.1.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 20      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.1.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 20      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.1.layer_scale             | output              | torch.float32 |         | -3.6709585        | 4.1758065        | 0.0673691      | 0.2063706             | torch.Size([12, 64, 64, 176])    |
| 21      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.1.add                     | input_0             | torch.float32 |         | -3.2213342        | 7.9977160        | 0.3043116      | 0.3855745             | torch.Size([12, 64, 64, 176])    |
| 21      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.1.add                     | input_1             | torch.float32 |         | -3.6709585        | 4.1758065        | 0.0673691      | 0.2063706             | torch.Size([12, 64, 64, 176])    |
| 21      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.1.add                     | output              | torch.float32 |         | -5.1103468        | 8.7223406        | 0.3716807      | 0.7534573             | torch.Size([12, 64, 64, 176])    |
| 22      | torch.nn.modules.linear.Identity                                                  | backbone.stages.0.block.1.extra_act               | input               | torch.float32 |         | -5.1103468        | 8.7223406        | 0.3716807      | 0.7534573             | torch.Size([12, 64, 64, 176])    |
| 22      | torch.nn.modules.linear.Identity                                                  | backbone.stages.0.block.1.extra_act               | output              | torch.float32 |         | -5.1103468        | 8.7223406        | 0.3716807      | 0.7534573             | torch.Size([12, 64, 64, 176])    |
| 23      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.dwconv.0                | input               | torch.float32 |         | -5.1103468        | 8.7223406        | 0.3716807      | 0.7534573             | torch.Size([12, 64, 64, 176])    |
| 23      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.dwconv.0                | weight              | torch.float32 |         | -0.5433433        | 0.5054607        | -0.0009662     | 0.0400869             | torch.Size([64, 1, 3, 3])        |
| 23      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.dwconv.0                | bias                | torch.float32 |         | -0.3585573        | 0.3141512        | -0.0436077     | 0.0391571             | torch.Size([64])                 |
| 23      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.dwconv.0                | output              | torch.float32 |         | -6.1176944        | 4.8560796        | -0.0046447     | 0.3118336             | torch.Size([12, 64, 64, 176])    |
| 24      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.2.dwconv.1                | input               | torch.float32 |         | -6.1176944        | 4.8560796        | -0.0046447     | 0.3118336             | torch.Size([12, 64, 64, 176])    |
| 24      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.2.dwconv.1                | weight              | torch.float32 |         | 0.8924315         | 1.2950463        | 1.0563384      | 0.0085405             | torch.Size([64])                 |
| 24      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.2.dwconv.1                | bias                | torch.float32 |         | -0.2999517        | 0.3675122        | 0.0163211      | 0.0185360             | torch.Size([64])                 |
| 24      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.2.dwconv.1                | running_mean        | torch.float32 |         | -1.0247276        | 1.1156051        | -0.0063502     | 0.1516759             | torch.Size([64])                 |
| 24      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.2.dwconv.1                | running_var         | torch.float32 |         | 0.0243191         | 0.6993428        | 0.1644716      | 0.0156216             | torch.Size([64])                 |
| 24      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.2.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 24      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.2.dwconv.1                | output              | torch.float32 |         | -12.0774832       | 11.0853281       | 0.0068069      | 1.0897411             | torch.Size([12, 64, 64, 176])    |
| 25      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.pwconv1                 | input               | torch.float32 |         | -12.0774832       | 11.0853281       | 0.0068069      | 1.0897411             | torch.Size([12, 64, 64, 176])    |
| 25      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.pwconv1                 | weight              | torch.float32 |         | -0.5646603        | 0.4319130        | -0.0004717     | 0.0100047             | torch.Size([128, 64, 1, 1])      |
| 25      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.pwconv1                 | bias                | torch.float32 |         | -0.3360012        | 0.1481226        | -0.1170983     | 0.0094382             | torch.Size([128])                |
| 25      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.pwconv1                 | output              | torch.float32 |         | -11.4690695       | 11.0188255       | -0.2396717     | 1.0384570             | torch.Size([12, 128, 64, 176])   |
| 26      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.0.block.2.act                     | input               | torch.float32 |         | -11.4690695       | 11.0188255       | -0.2396717     | 1.0384570             | torch.Size([12, 128, 64, 176])   |
| 26      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.0.block.2.act                     | output              | torch.float32 |         | -0.1699712        | 11.0188255       | 0.1575298      | 0.2728781             | torch.Size([12, 128, 64, 176])   |
| 27      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 11.0188255       | 0.1575298      | 0.2728781             | torch.Size([12, 128, 64, 176])   |
| 27      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.pwconv2                 | weight              | torch.float32 |         | -0.2933576        | 0.2926795        | 0.0028922      | 0.0066214             | torch.Size([64, 128, 1, 1])      |
| 27      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.pwconv2                 | bias                | torch.float32 |         | -0.3788261        | 0.3632554        | 0.0426626      | 0.0264571             | torch.Size([64])                 |
| 27      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.2.pwconv2                 | output              | torch.float32 |         | -7.8798804        | 8.5603056        | 0.1173339      | 0.4262929             | torch.Size([12, 64, 64, 176])    |
| 28      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.2.layer_scale             | input               | torch.float32 |         | -7.8798804        | 8.5603056        | 0.1173339      | 0.4262929             | torch.Size([12, 64, 64, 176])    |
| 28      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.2.layer_scale             | weight              | torch.float32 |         | 0.8596543         | 1.1793767        | 1.0263124      | 0.0044708             | torch.Size([64])                 |
| 28      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.2.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 28      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.2.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 28      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.2.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 28      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.2.layer_scale             | output              | torch.float32 |         | -8.1918240        | 8.9021053        | 0.1252366      | 0.4581395             | torch.Size([12, 64, 64, 176])    |
| 29      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.2.add                     | input_0             | torch.float32 |         | -5.1103468        | 8.7223406        | 0.3716807      | 0.7534573             | torch.Size([12, 64, 64, 176])    |
| 29      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.2.add                     | input_1             | torch.float32 |         | -8.1918240        | 8.9021053        | 0.1252366      | 0.4581395             | torch.Size([12, 64, 64, 176])    |
| 29      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.2.add                     | output              | torch.float32 |         | -8.0884552        | 11.0042400       | 0.4969173      | 1.3558917             | torch.Size([12, 64, 64, 176])    |
| 30      | torch.nn.modules.linear.Identity                                                  | backbone.stages.0.block.2.extra_act               | input               | torch.float32 |         | -8.0884552        | 11.0042400       | 0.4969173      | 1.3558917             | torch.Size([12, 64, 64, 176])    |
| 30      | torch.nn.modules.linear.Identity                                                  | backbone.stages.0.block.2.extra_act               | output              | torch.float32 |         | -8.0884552        | 11.0042400       | 0.4969173      | 1.3558917             | torch.Size([12, 64, 64, 176])    |
| 31      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.dwconv.0                | input               | torch.float32 |         | -8.0884552        | 11.0042400       | 0.4969173      | 1.3558917             | torch.Size([12, 64, 64, 176])    |
| 31      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.dwconv.0                | weight              | torch.float32 |         | -0.5259567        | 0.5147526        | 0.0131518      | 0.0407158             | torch.Size([64, 1, 3, 3])        |
| 31      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.dwconv.0                | bias                | torch.float32 |         | -0.3852782        | 0.3410209        | -0.0542522     | 0.0441050             | torch.Size([64])                 |
| 31      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.dwconv.0                | output              | torch.float32 |         | -8.9643564        | 10.4960585       | -0.0382693     | 0.9231403             | torch.Size([12, 64, 64, 176])    |
| 32      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.3.dwconv.1                | input               | torch.float32 |         | -8.9643564        | 10.4960585       | -0.0382693     | 0.9231403             | torch.Size([12, 64, 64, 176])    |
| 32      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.3.dwconv.1                | weight              | torch.float32 |         | 0.9186531         | 1.2421800        | 1.0688386      | 0.0061103             | torch.Size([64])                 |
| 32      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.3.dwconv.1                | bias                | torch.float32 |         | -0.3413289        | 0.3570944        | 0.0074532      | 0.0175383             | torch.Size([64])                 |
| 32      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.3.dwconv.1                | running_mean        | torch.float32 |         | -1.7984624        | 2.0937774        | -0.0303710     | 0.5097538             | torch.Size([64])                 |
| 32      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.3.dwconv.1                | running_var         | torch.float32 |         | 0.1059927         | 2.7807817        | 0.4156391      | 0.1717573             | torch.Size([64])                 |
| 32      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.3.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 32      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.0.block.3.dwconv.1                | output              | torch.float32 |         | -10.3421478       | 10.5012283       | -0.0005058     | 1.2001146             | torch.Size([12, 64, 64, 176])    |
| 33      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.pwconv1                 | input               | torch.float32 |         | -10.3421478       | 10.5012283       | -0.0005058     | 1.2001146             | torch.Size([12, 64, 64, 176])    |
| 33      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.pwconv1                 | weight              | torch.float32 |         | -0.3884685        | 0.4816757        | -0.0017783     | 0.0104277             | torch.Size([128, 64, 1, 1])      |
| 33      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.pwconv1                 | bias                | torch.float32 |         | -0.3522463        | 0.0540238        | -0.1331877     | 0.0076328             | torch.Size([128])                |
| 33      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.pwconv1                 | output              | torch.float32 |         | -11.9766855       | 9.7347450        | -0.2529777     | 1.1106136             | torch.Size([12, 128, 64, 176])   |
| 34      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.0.block.3.act                     | input               | torch.float32 |         | -11.9766855       | 9.7347450        | -0.2529777     | 1.1106136             | torch.Size([12, 128, 64, 176])   |
| 34      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.0.block.3.act                     | output              | torch.float32 |         | -0.1699712        | 9.7347450        | 0.1700028      | 0.2692199             | torch.Size([12, 128, 64, 176])   |
| 35      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 9.7347450        | 0.1700028      | 0.2692199             | torch.Size([12, 128, 64, 176])   |
| 35      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.pwconv2                 | weight              | torch.float32 |         | -0.2933251        | 0.2856078        | 0.0007213      | 0.0074106             | torch.Size([64, 128, 1, 1])      |
| 35      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.pwconv2                 | bias                | torch.float32 |         | -0.2142833        | 0.1786083        | -0.0018402     | 0.0062158             | torch.Size([64])                 |
| 35      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.0.block.3.pwconv2                 | output              | torch.float32 |         | -8.6031961        | 6.0596399        | 0.0383233      | 0.4210027             | torch.Size([12, 64, 64, 176])    |
| 36      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.3.layer_scale             | input               | torch.float32 |         | -8.6031961        | 6.0596399        | 0.0383233      | 0.4210027             | torch.Size([12, 64, 64, 176])    |
| 36      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.3.layer_scale             | weight              | torch.float32 |         | 0.8770425         | 1.2184547        | 1.0589550      | 0.0039925             | torch.Size([64])                 |
| 36      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.3.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 36      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.3.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 36      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.3.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 36      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.0.block.3.layer_scale             | output              | torch.float32 |         | -9.9082165        | 6.8577213        | 0.0373310      | 0.4842033             | torch.Size([12, 64, 64, 176])    |
| 37      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.3.add                     | input_0             | torch.float32 |         | -8.0884552        | 11.0042400       | 0.4969173      | 1.3558917             | torch.Size([12, 64, 64, 176])    |
| 37      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.3.add                     | input_1             | torch.float32 |         | -9.9082165        | 6.8577213        | 0.0373310      | 0.4842033             | torch.Size([12, 64, 64, 176])    |
| 37      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.0.block.3.add                     | output              | torch.float32 |         | -11.2078342       | 11.5802841       | 0.5342482      | 1.9201154             | torch.Size([12, 64, 64, 176])    |
| 38      | torch.nn.modules.linear.Identity                                                  | backbone.stages.0.block.3.extra_act               | input               | torch.float32 |         | -11.2078342       | 11.5802841       | 0.5342482      | 1.9201154             | torch.Size([12, 64, 64, 176])    |
| 38      | torch.nn.modules.linear.Identity                                                  | backbone.stages.0.block.3.extra_act               | output              | torch.float32 |         | -11.2078342       | 11.5802841       | 0.5342482      | 1.9201154             | torch.Size([12, 64, 64, 176])    |
| 39      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.0                             | input               | torch.float32 |         | -11.2078342       | 11.5802841       | 0.5342482      | 1.9201154             | torch.Size([12, 64, 64, 176])    |
| 39      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.0                             | weight              | torch.float32 |         | 0.0117894         | 0.2205137        | 0.0516751      | 0.0024542             | torch.Size([64])                 |
| 39      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.0                             | bias                | torch.float32 |         | -0.0642388        | 0.1026925        | 0.0011650      | 0.0004492             | torch.Size([64])                 |
| 39      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.0                             | running_mean        | torch.float32 |         | -1.0947821        | 2.1904764        | 0.6345965      | 0.4023141             | torch.Size([64])                 |
| 39      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.0                             | running_var         | torch.float32 |         | 0.7230649         | 3.3071532        | 1.4103394      | 0.2920620             | torch.Size([64])                 |
| 39      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.0                             | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 39      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.0                             | output              | torch.float32 |         | -1.3446839        | 1.5299978        | -0.0022292     | 0.0058741             | torch.Size([12, 64, 64, 176])    |
| 40      | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer                | backbone.up                                       | input               | torch.float32 |         | -1.3446839        | 1.5299978        | -0.0022292     | 0.0058741             | torch.Size([12, 64, 64, 176])    |
| 40      | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer                | backbone.up                                       | output              | torch.float32 |         | -1.1775608        | 1.1897231        | -0.0022292     | 0.0045502             | torch.Size([12, 64, 128, 352])   |
| 41      | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.0.proj.0                | input               | torch.float32 |         | -11.2078342       | 11.5802841       | 0.5342482      | 1.9201154             | torch.Size([12, 64, 64, 176])    |
| 41      | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.0.proj.0                | weight              | torch.float32 |         | -0.3343122        | 0.3368192        | -0.0000681     | 0.0047385             | torch.Size([128, 64, 2, 2])      |
| 41      | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.0.proj.0                | bias                | torch.float32 |         | -0.1131299        | 0.1289503        | -0.0014777     | 0.0027998             | torch.Size([128])                |
| 41      | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.0.proj.0                | output              | torch.float32 |         | -28.5822029       | 29.8500137       | 0.1368564      | 17.0544853            | torch.Size([12, 128, 32, 88])    |
| 42      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.0.proj.1                | input               | torch.float32 |         | -28.5822029       | 29.8500137       | 0.1368564      | 17.0544853            | torch.Size([12, 128, 32, 88])    |
| 42      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.0.proj.1                | weight              | torch.float32 |         | 0.8001742         | 1.0899880        | 0.9382904      | 0.0040341             | torch.Size([128])                |
| 42      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.0.proj.1                | bias                | torch.float32 |         | -0.4081046        | 0.4730853        | 0.0097137      | 0.0370465             | torch.Size([128])                |
| 42      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.0.proj.1                | running_mean        | torch.float32 |         | -6.2579036        | 4.6049733        | 0.1222483      | 5.0232015             | torch.Size([128])                |
| 42      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.0.proj.1                | running_var         | torch.float32 |         | 5.9457178         | 20.0781212       | 11.6376610     | 9.4397821             | torch.Size([128])                |
| 42      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.0.proj.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 42      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.0.proj.1                | output              | torch.float32 |         | -7.9715323        | 8.7248001        | 0.0117241      | 1.0675069             | torch.Size([12, 128, 32, 88])    |
| 43      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.dwconv.0                | input               | torch.float32 |         | -7.9715323        | 8.7248001        | 0.0117241      | 1.0675069             | torch.Size([12, 128, 32, 88])    |
| 43      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.dwconv.0                | weight              | torch.float32 |         | -0.5160743        | 0.5215851        | 0.0032022      | 0.0402726             | torch.Size([128, 1, 3, 3])       |
| 43      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.dwconv.0                | bias                | torch.float32 |         | -0.3495312        | 0.4128329        | 0.0278364      | 0.0375477             | torch.Size([128])                |
| 43      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.dwconv.0                | output              | torch.float32 |         | -7.2662549        | 6.9724827        | 0.0357720      | 0.3783083             | torch.Size([12, 128, 32, 88])    |
| 44      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.0.dwconv.1                | input               | torch.float32 |         | -7.2662549        | 6.9724827        | 0.0357720      | 0.3783083             | torch.Size([12, 128, 32, 88])    |
| 44      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.0.dwconv.1                | weight              | torch.float32 |         | 0.8212658         | 1.2122475        | 1.0043330      | 0.0053085             | torch.Size([128])                |
| 44      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.0.dwconv.1                | bias                | torch.float32 |         | -0.3098888        | 0.3333715        | -0.0058013     | 0.0206467             | torch.Size([128])                |
| 44      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.0.dwconv.1                | running_mean        | torch.float32 |         | -0.6786640        | 0.5448785        | 0.0257835      | 0.0506862             | torch.Size([128])                |
| 44      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.0.dwconv.1                | running_var         | torch.float32 |         | 0.0489349         | 1.5157744        | 0.2859935      | 0.0646138             | torch.Size([128])                |
| 44      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.0.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 44      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.0.dwconv.1                | output              | torch.float32 |         | -8.8320112        | 9.8424015        | 0.0071300      | 1.1845382             | torch.Size([12, 128, 32, 88])    |
| 45      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.pwconv1                 | input               | torch.float32 |         | -8.8320112        | 9.8424015        | 0.0071300      | 1.1845382             | torch.Size([12, 128, 32, 88])    |
| 45      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.pwconv1                 | weight              | torch.float32 |         | -0.3914456        | 0.4410824        | 0.0011752      | 0.0094514             | torch.Size([256, 128, 1, 1])     |
| 45      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.pwconv1                 | bias                | torch.float32 |         | -0.3276729        | 0.0660501        | -0.1640906     | 0.0061752             | torch.Size([256])                |
| 45      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.pwconv1                 | output              | torch.float32 |         | -9.6345482        | 8.2519178        | -0.3241679     | 1.0663499             | torch.Size([12, 256, 32, 88])    |
| 46      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.1.block.0.act                     | input               | torch.float32 |         | -9.6345482        | 8.2519178        | -0.3241679     | 1.0663499             | torch.Size([12, 256, 32, 88])    |
| 46      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.1.block.0.act                     | output              | torch.float32 |         | -0.1699712        | 8.2519178        | 0.1366622      | 0.2420629             | torch.Size([12, 256, 32, 88])    |
| 47      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 8.2519178        | 0.1366622      | 0.2420629             | torch.Size([12, 256, 32, 88])    |
| 47      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.pwconv2                 | weight              | torch.float32 |         | -0.2841499        | 0.3206936        | 0.0000517      | 0.0050563             | torch.Size([128, 256, 1, 1])     |
| 47      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.pwconv2                 | bias                | torch.float32 |         | -0.5474290        | 0.3475759        | 0.0046501      | 0.0285004             | torch.Size([128])                |
| 47      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.0.pwconv2                 | output              | torch.float32 |         | -8.0137348        | 7.5445113        | 0.0186978      | 0.7137478             | torch.Size([12, 128, 32, 88])    |
| 48      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.0.layer_scale             | input               | torch.float32 |         | -8.0137348        | 7.5445113        | 0.0186978      | 0.7137478             | torch.Size([12, 128, 32, 88])    |
| 48      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.0.layer_scale             | weight              | torch.float32 |         | 0.7990863         | 1.1815653        | 0.9802990      | 0.0041775             | torch.Size([128])                |
| 48      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.0.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 48      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.0.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 48      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.0.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 48      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.0.layer_scale             | output              | torch.float32 |         | -8.4483290        | 8.2683887        | 0.0197118      | 0.7036498             | torch.Size([12, 128, 32, 88])    |
| 49      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.1.block.0.add                     | input_0             | torch.float32 |         | -7.9715323        | 8.7248001        | 0.0117241      | 1.0675069             | torch.Size([12, 128, 32, 88])    |
| 49      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.1.block.0.add                     | input_1             | torch.float32 |         | -8.4483290        | 8.2683887        | 0.0197118      | 0.7036498             | torch.Size([12, 128, 32, 88])    |
| 49      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.1.block.0.add                     | output              | torch.float32 |         | -9.7265892        | 10.1831341       | 0.0314358      | 1.9580249             | torch.Size([12, 128, 32, 88])    |
| 50      | torch.nn.modules.linear.Identity                                                  | backbone.stages.1.block.0.extra_act               | input               | torch.float32 |         | -9.7265892        | 10.1831341       | 0.0314358      | 1.9580249             | torch.Size([12, 128, 32, 88])    |
| 50      | torch.nn.modules.linear.Identity                                                  | backbone.stages.1.block.0.extra_act               | output              | torch.float32 |         | -9.7265892        | 10.1831341       | 0.0314358      | 1.9580249             | torch.Size([12, 128, 32, 88])    |
| 51      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.dwconv.0                | input               | torch.float32 |         | -9.7265892        | 10.1831341       | 0.0314358      | 1.9580249             | torch.Size([12, 128, 32, 88])    |
| 51      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.dwconv.0                | weight              | torch.float32 |         | -0.5110920        | 0.4765207        | -0.0082873     | 0.0411973             | torch.Size([128, 1, 3, 3])       |
| 51      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.dwconv.0                | bias                | torch.float32 |         | -0.3922721        | 0.3933981        | -0.0000211     | 0.0426198             | torch.Size([128])                |
| 51      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.dwconv.0                | output              | torch.float32 |         | -7.6271515        | 7.8362560        | -0.0220174     | 0.7147872             | torch.Size([12, 128, 32, 88])    |
| 52      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.1.dwconv.1                | input               | torch.float32 |         | -7.6271515        | 7.8362560        | -0.0220174     | 0.7147872             | torch.Size([12, 128, 32, 88])    |
| 52      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.1.dwconv.1                | weight              | torch.float32 |         | 0.7883746         | 1.2377831        | 1.0436901      | 0.0057963             | torch.Size([128])                |
| 52      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.1.dwconv.1                | bias                | torch.float32 |         | -0.4157436        | 0.3428188        | 0.0047059      | 0.0217261             | torch.Size([128])                |
| 52      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.1.dwconv.1                | running_mean        | torch.float32 |         | -0.7887210        | 1.2349954        | -0.0218956     | 0.1193164             | torch.Size([128])                |
| 52      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.1.dwconv.1                | running_var         | torch.float32 |         | 0.0991362         | 3.0391896        | 0.4625889      | 0.1486801             | torch.Size([128])                |
| 52      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.1.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 52      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.1.dwconv.1                | output              | torch.float32 |         | -9.9135418        | 9.3659973        | -0.0046225     | 1.3295255             | torch.Size([12, 128, 32, 88])    |
| 53      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.pwconv1                 | input               | torch.float32 |         | -9.9135418        | 9.3659973        | -0.0046225     | 1.3295255             | torch.Size([12, 128, 32, 88])    |
| 53      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.pwconv1                 | weight              | torch.float32 |         | -0.4112096        | 0.4213794        | 0.0000915      | 0.0102508             | torch.Size([256, 64, 1, 1])      |
| 53      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.pwconv1                 | bias                | torch.float32 |         | -0.3662765        | 0.0691123        | -0.1543788     | 0.0071230             | torch.Size([256])                |
| 53      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.pwconv1                 | output              | torch.float32 |         | -10.8570929       | 8.8517714        | -0.3987194     | 1.2278535             | torch.Size([12, 256, 32, 88])    |
| 54      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.1.block.1.act                     | input               | torch.float32 |         | -10.8570929       | 8.8517714        | -0.3987194     | 1.2278535             | torch.Size([12, 256, 32, 88])    |
| 54      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.1.block.1.act                     | output              | torch.float32 |         | -0.1699712        | 8.8517714        | 0.1327492      | 0.2429791             | torch.Size([12, 256, 32, 88])    |
| 55      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 8.8517714        | 0.1327492      | 0.2429791             | torch.Size([12, 256, 32, 88])    |
| 55      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.pwconv2                 | weight              | torch.float32 |         | -0.2858110        | 0.3134532        | 0.0007512      | 0.0056006             | torch.Size([128, 256, 1, 1])     |
| 55      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.pwconv2                 | bias                | torch.float32 |         | -0.3291121        | 0.3918854        | 0.0060355      | 0.0255167             | torch.Size([128])                |
| 55      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.1.pwconv2                 | output              | torch.float32 |         | -9.4529476        | 10.2834110       | 0.0292767      | 0.6601249             | torch.Size([12, 128, 32, 88])    |
| 56      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.1.layer_scale             | input               | torch.float32 |         | -9.4529476        | 10.2834110       | 0.0292767      | 0.6601249             | torch.Size([12, 128, 32, 88])    |
| 56      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.1.layer_scale             | weight              | torch.float32 |         | 0.8037558         | 1.2196577        | 1.0431898      | 0.0056854             | torch.Size([128])                |
| 56      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.1.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 56      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.1.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 56      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.1.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 56      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.1.layer_scale             | output              | torch.float32 |         | -11.2541733       | 11.0205135       | 0.0319909      | 0.7314849             | torch.Size([12, 128, 32, 88])    |
| 57      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.1.block.1.add                     | input_0             | torch.float32 |         | -9.7265892        | 10.1831341       | 0.0314358      | 1.9580249             | torch.Size([12, 128, 32, 88])    |
| 57      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.1.block.1.add                     | input_1             | torch.float32 |         | -11.2541733       | 11.0205135       | 0.0319909      | 0.7314849             | torch.Size([12, 128, 32, 88])    |
| 57      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.1.block.1.add                     | output              | torch.float32 |         | -13.2713337       | 14.0913467       | 0.0634267      | 2.9098144             | torch.Size([12, 128, 32, 88])    |
| 58      | torch.nn.modules.linear.Identity                                                  | backbone.stages.1.block.1.extra_act               | input               | torch.float32 |         | -13.2713337       | 14.0913467       | 0.0634267      | 2.9098144             | torch.Size([12, 128, 32, 88])    |
| 58      | torch.nn.modules.linear.Identity                                                  | backbone.stages.1.block.1.extra_act               | output              | torch.float32 |         | -13.2713337       | 14.0913467       | 0.0634267      | 2.9098144             | torch.Size([12, 128, 32, 88])    |
| 59      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.dwconv.0                | input               | torch.float32 |         | -13.2713337       | 14.0913467       | 0.0634267      | 2.9098144             | torch.Size([12, 128, 32, 88])    |
| 59      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.dwconv.0                | weight              | torch.float32 |         | -0.5255685        | 0.4571402        | -0.0068278     | 0.0410900             | torch.Size([128, 1, 3, 3])       |
| 59      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.dwconv.0                | bias                | torch.float32 |         | -0.3953146        | 0.3303456        | -0.0121198     | 0.0376768             | torch.Size([128])                |
| 59      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.dwconv.0                | output              | torch.float32 |         | -13.2133131       | 10.4999390       | -0.0427625     | 1.2604481             | torch.Size([12, 128, 32, 88])    |
| 60      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.2.dwconv.1                | input               | torch.float32 |         | -13.2133131       | 10.4999390       | -0.0427625     | 1.2604481             | torch.Size([12, 128, 32, 88])    |
| 60      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.2.dwconv.1                | weight              | torch.float32 |         | 0.9168813         | 1.2183514        | 1.0397336      | 0.0039987             | torch.Size([128])                |
| 60      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.2.dwconv.1                | bias                | torch.float32 |         | -0.3731218        | 0.2898483        | -0.0002154     | 0.0161112             | torch.Size([128])                |
| 60      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.2.dwconv.1                | running_mean        | torch.float32 |         | -1.6668308        | 1.6687574        | -0.0316023     | 0.2187980             | torch.Size([128])                |
| 60      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.2.dwconv.1                | running_var         | torch.float32 |         | 0.1669743         | 4.3142428        | 0.7901220      | 0.4783144             | torch.Size([128])                |
| 60      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.2.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 60      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.1.block.2.dwconv.1                | output              | torch.float32 |         | -10.0854301       | 9.4986897        | -0.0018240     | 1.3157402             | torch.Size([12, 128, 32, 88])    |
| 61      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.pwconv1                 | input               | torch.float32 |         | -10.0854301       | 9.4986897        | -0.0018240     | 1.3157402             | torch.Size([12, 128, 32, 88])    |
| 61      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.pwconv1                 | weight              | torch.float32 |         | -0.4869311        | 0.3686527        | 0.0017785      | 0.0100755             | torch.Size([256, 64, 1, 1])      |
| 61      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.pwconv1                 | bias                | torch.float32 |         | -0.3687980        | 0.1828431        | -0.1405843     | 0.0078767             | torch.Size([256])                |
| 61      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.pwconv1                 | output              | torch.float32 |         | -12.4774342       | 9.3698997        | -0.3531630     | 1.0951368             | torch.Size([12, 256, 32, 88])    |
| 62      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.1.block.2.act                     | input               | torch.float32 |         | -12.4774342       | 9.3698997        | -0.3531630     | 1.0951368             | torch.Size([12, 256, 32, 88])    |
| 62      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.1.block.2.act                     | output              | torch.float32 |         | -0.1699712        | 9.3698997        | 0.1330555      | 0.2262942             | torch.Size([12, 256, 32, 88])    |
| 63      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 9.3698997        | 0.1330555      | 0.2262942             | torch.Size([12, 256, 32, 88])    |
| 63      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.pwconv2                 | weight              | torch.float32 |         | -0.3071488        | 0.3175278        | -0.0002617     | 0.0056914             | torch.Size([128, 256, 1, 1])     |
| 63      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.pwconv2                 | bias                | torch.float32 |         | -0.1533142        | 0.1873277        | 0.0020024      | 0.0032971             | torch.Size([128])                |
| 63      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.1.block.2.pwconv2                 | output              | torch.float32 |         | -7.5177140        | 7.4773660        | -0.0084651     | 0.6482680             | torch.Size([12, 128, 32, 88])    |
| 64      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.2.layer_scale             | input               | torch.float32 |         | -7.5177140        | 7.4773660        | -0.0084651     | 0.6482680             | torch.Size([12, 128, 32, 88])    |
| 64      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.2.layer_scale             | weight              | torch.float32 |         | 0.8033240         | 1.2754916        | 1.0460650      | 0.0084042             | torch.Size([128])                |
| 64      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.2.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 64      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.2.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 64      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.2.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 64      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.1.block.2.layer_scale             | output              | torch.float32 |         | -7.7891226        | 8.3233395        | -0.0057963     | 0.7306582             | torch.Size([12, 128, 32, 88])    |
| 65      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.1.block.2.add                     | input_0             | torch.float32 |         | -13.2713337       | 14.0913467       | 0.0634267      | 2.9098144             | torch.Size([12, 128, 32, 88])    |
| 65      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.1.block.2.add                     | input_1             | torch.float32 |         | -7.7891226        | 8.3233395        | -0.0057963     | 0.7306582             | torch.Size([12, 128, 32, 88])    |
| 65      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.1.block.2.add                     | output              | torch.float32 |         | -14.0580425       | 14.8416958       | 0.0576305      | 4.0002923             | torch.Size([12, 128, 32, 88])    |
| 66      | torch.nn.modules.linear.Identity                                                  | backbone.stages.1.block.2.extra_act               | input               | torch.float32 |         | -14.0580425       | 14.8416958       | 0.0576305      | 4.0002923             | torch.Size([12, 128, 32, 88])    |
| 66      | torch.nn.modules.linear.Identity                                                  | backbone.stages.1.block.2.extra_act               | output              | torch.float32 |         | -14.0580425       | 14.8416958       | 0.0576305      | 4.0002923             | torch.Size([12, 128, 32, 88])    |
| 67      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.1                             | input               | torch.float32 |         | -14.0580425       | 14.8416958       | 0.0576305      | 4.0002923             | torch.Size([12, 128, 32, 88])    |
| 67      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.1                             | weight              | torch.float32 |         | 0.0262112         | 0.7064559        | 0.2382833      | 0.0347776             | torch.Size([128])                |
| 67      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.1                             | bias                | torch.float32 |         | -0.0895434        | 0.0831020        | -0.0037414     | 0.0007803             | torch.Size([128])                |
| 67      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.1                             | running_mean        | torch.float32 |         | -2.1813064        | 1.4853035        | 0.0402109      | 0.6425784             | torch.Size([128])                |
| 67      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.1                             | running_var         | torch.float32 |         | 1.3674419         | 7.8914661        | 2.6385241      | 0.5175550             | torch.Size([128])                |
| 67      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.1                             | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 67      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.1                             | output              | torch.float32 |         | -3.8015001        | 4.0431347        | -0.0035960     | 0.1104387             | torch.Size([12, 128, 32, 88])    |
| 68      | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.1.proj.0                | input               | torch.float32 |         | -14.0580425       | 14.8416958       | 0.0576305      | 4.0002923             | torch.Size([12, 128, 32, 88])    |
| 68      | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.1.proj.0                | weight              | torch.float32 |         | -0.2912754        | 0.2950233        | -0.0004733     | 0.0042079             | torch.Size([192, 128, 2, 2])     |
| 68      | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.1.proj.0                | bias                | torch.float32 |         | -0.1640473        | 0.1622401        | 0.0089632      | 0.0031715             | torch.Size([192])                |
| 68      | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.1.proj.0                | output              | torch.float32 |         | -62.4874763       | 58.5273285       | -0.3619579     | 91.2039871            | torch.Size([12, 192, 16, 44])    |
| 69      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.1.proj.1                | input               | torch.float32 |         | -62.4874763       | 58.5273285       | -0.3619579     | 91.2039871            | torch.Size([12, 192, 16, 44])    |
| 69      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.1.proj.1                | weight              | torch.float32 |         | 0.8074346         | 1.2409964        | 0.9845023      | 0.0063342             | torch.Size([192])                |
| 69      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.1.proj.1                | bias                | torch.float32 |         | -0.5023183        | 0.4285513        | -0.0050163     | 0.0310653             | torch.Size([192])                |
| 69      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.1.proj.1                | running_mean        | torch.float32 |         | -9.1380796        | 7.2710991        | -0.2229784     | 11.0640354            | torch.Size([192])                |
| 69      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.1.proj.1                | running_var         | torch.float32 |         | 29.5741920        | 106.4489288      | 60.9895935     | 203.1698303           | torch.Size([192])                |
| 69      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.1.proj.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 69      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.1.proj.1                | output              | torch.float32 |         | -8.2101498        | 6.7952056        | -0.0228992     | 1.2734358             | torch.Size([12, 192, 16, 44])    |
| 70      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.dwconv.0.0              | input               | torch.float32 |         | -8.2101498        | 6.7952056        | -0.0228992     | 1.2734358             | torch.Size([12, 192, 16, 44])    |
| 70      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.dwconv.0.0              | weight              | torch.float32 |         | -0.5509536        | 0.5457276        | -0.0121640     | 0.0662187             | torch.Size([192, 1, 1, 5])       |
| 70      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.dwconv.0.0              | bias                | torch.float32 |         | -0.5398935        | 0.4931325        | -0.0257959     | 0.0740800             | torch.Size([192])                |
| 70      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.dwconv.0.0              | output              | torch.float32 |         | -6.3208413        | 8.6473198        | -0.0293224     | 0.4024470             | torch.Size([12, 192, 16, 44])    |
| 71      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.0.dwconv.0.1              | input               | torch.float32 |         | -6.3208413        | 8.6473198        | -0.0293224     | 0.4024470             | torch.Size([12, 192, 16, 44])    |
| 71      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.0.dwconv.0.1              | weight              | torch.float32 |         | 0.6165341         | 1.1494366        | 0.8287810      | 0.0084936             | torch.Size([192])                |
| 71      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.0.dwconv.0.1              | bias                | torch.float32 |         | -0.6259901        | 0.4929664        | 0.0111829      | 0.0520395             | torch.Size([192])                |
| 71      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.0.dwconv.0.1              | running_mean        | torch.float32 |         | -0.5873914        | 0.6346226        | -0.0260049     | 0.0814427             | torch.Size([192])                |
| 71      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.0.dwconv.0.1              | running_var         | torch.float32 |         | 0.0270270         | 1.6913306        | 0.2606041      | 0.0767992             | torch.Size([192])                |
| 71      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.0.dwconv.0.1              | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 71      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.0.dwconv.0.1              | output              | torch.float32 |         | -9.5691824        | 10.6824083       | 0.0102550      | 0.8811500             | torch.Size([12, 192, 16, 44])    |
| 72      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.pwconv1.0               | input               | torch.float32 |         | -9.5691824        | 10.6824083       | 0.0102550      | 0.8811500             | torch.Size([12, 192, 16, 44])    |
| 72      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.pwconv1.0               | weight              | torch.float32 |         | -0.3090113        | 0.3081646        | -0.0006261     | 0.0043692             | torch.Size([384, 192, 1, 1])     |
| 72      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.pwconv1.0               | bias                | torch.float32 |         | -0.3488774        | 0.0704211        | -0.1746289     | 0.0044114             | torch.Size([384])                |
| 72      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.pwconv1.0               | output              | torch.float32 |         | -13.4355392       | 8.7262135        | -0.8314573     | 1.6282815             | torch.Size([12, 384, 16, 44])    |
| 73      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.0.pwconv1.1               | input               | torch.float32 |         | -13.4355392       | 8.7262135        | -0.8314573     | 1.6282815             | torch.Size([12, 384, 16, 44])    |
| 73      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.0.pwconv1.1               | output              | torch.float32 |         | -0.1699712        | 8.7262135        | 0.0692095      | 0.1964265             | torch.Size([12, 384, 16, 44])    |
| 74      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 8.7262135        | 0.0692095      | 0.1964265             | torch.Size([12, 384, 16, 44])    |
| 74      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.pwconv2                 | weight              | torch.float32 |         | -0.2637973        | 0.2421057        | -0.0004276     | 0.0037126             | torch.Size([192, 384, 1, 1])     |
| 74      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.pwconv2                 | bias                | torch.float32 |         | -0.3787431        | 0.3524798        | -0.0023382     | 0.0178199             | torch.Size([192])                |
| 74      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.0.pwconv2                 | output              | torch.float32 |         | -8.9634094        | 8.2316313        | -0.0108167     | 0.7077615             | torch.Size([12, 192, 16, 44])    |
| 75      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.0.layer_scale             | input               | torch.float32 |         | -8.9634094        | 8.2316313        | -0.0108167     | 0.7077615             | torch.Size([12, 192, 16, 44])    |
| 75      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.0.layer_scale             | weight              | torch.float32 |         | 0.7076299         | 1.0087876        | 0.8550977      | 0.0053117             | torch.Size([192])                |
| 75      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.0.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 75      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.0.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 75      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.0.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 75      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.0.layer_scale             | output              | torch.float32 |         | -8.7075453        | 7.6226978        | -0.0103368     | 0.5437603             | torch.Size([12, 192, 16, 44])    |
| 76      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.0.add                     | input_0             | torch.float32 |         | -8.2101498        | 6.7952056        | -0.0228992     | 1.2734358             | torch.Size([12, 192, 16, 44])    |
| 76      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.0.add                     | input_1             | torch.float32 |         | -8.7075453        | 7.6226978        | -0.0103368     | 0.5437603             | torch.Size([12, 192, 16, 44])    |
| 76      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.0.add                     | output              | torch.float32 |         | -10.3597164       | 10.3309383       | -0.0332360     | 1.9369709             | torch.Size([12, 192, 16, 44])    |
| 77      | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.0.extra_act               | input               | torch.float32 |         | -10.3597164       | 10.3309383       | -0.0332360     | 1.9369709             | torch.Size([12, 192, 16, 44])    |
| 77      | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.0.extra_act               | output              | torch.float32 |         | -10.3597164       | 10.3309383       | -0.0332360     | 1.9369709             | torch.Size([12, 192, 16, 44])    |
| 78      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.dwconv.0.0              | input               | torch.float32 |         | -10.3597164       | 10.3309383       | -0.0332360     | 1.9369709             | torch.Size([12, 192, 16, 44])    |
| 78      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.dwconv.0.0              | weight              | torch.float32 |         | -0.6517577        | 0.5477187        | -0.0110212     | 0.0703771             | torch.Size([192, 1, 5, 1])       |
| 78      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.dwconv.0.0              | bias                | torch.float32 |         | -0.5211562        | 0.5242157        | 0.0093156      | 0.0673379             | torch.Size([192])                |
| 78      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.dwconv.0.0              | output              | torch.float32 |         | -6.9452815        | 8.5885248        | -0.0111084     | 0.6473768             | torch.Size([12, 192, 16, 44])    |
| 79      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.1.dwconv.0.1              | input               | torch.float32 |         | -6.9452815        | 8.5885248        | -0.0111084     | 0.6473768             | torch.Size([12, 192, 16, 44])    |
| 79      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.1.dwconv.0.1              | weight              | torch.float32 |         | 0.7282351         | 1.0837243        | 0.9034445      | 0.0050831             | torch.Size([192])                |
| 79      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.1.dwconv.0.1              | bias                | torch.float32 |         | -0.6764271        | 0.5594273        | 0.0210628      | 0.0563949             | torch.Size([192])                |
| 79      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.1.dwconv.0.1              | running_mean        | torch.float32 |         | -1.0809245        | 1.6137949        | -0.0142501     | 0.1247746             | torch.Size([192])                |
| 79      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.1.dwconv.0.1              | running_var         | torch.float32 |         | 0.0530638         | 1.9092911        | 0.4095812      | 0.0792093             | torch.Size([192])                |
| 79      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.1.dwconv.0.1              | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 79      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.1.dwconv.0.1              | output              | torch.float32 |         | -8.9138088        | 8.0453568        | 0.0183832      | 1.0771229             | torch.Size([12, 192, 16, 44])    |
| 80      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.pwconv1.0               | input               | torch.float32 |         | -8.9138088        | 8.0453568        | 0.0183832      | 1.0771229             | torch.Size([12, 192, 16, 44])    |
| 80      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.pwconv1.0               | weight              | torch.float32 |         | -0.3532308        | 0.3216734        | -0.0008758     | 0.0052960             | torch.Size([384, 192, 1, 1])     |
| 80      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.pwconv1.0               | bias                | torch.float32 |         | -0.3628721        | 0.0242217        | -0.1780612     | 0.0045064             | torch.Size([384])                |
| 80      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.pwconv1.0               | output              | torch.float32 |         | -10.3991508       | 8.7763357        | -0.9875951     | 2.1993332             | torch.Size([12, 384, 16, 44])    |
| 81      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.1.pwconv1.1               | input               | torch.float32 |         | -10.3991508       | 8.7763357        | -0.9875951     | 2.1993332             | torch.Size([12, 384, 16, 44])    |
| 81      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.1.pwconv1.1               | output              | torch.float32 |         | -0.1699712        | 8.7763357        | 0.1144830      | 0.2630005             | torch.Size([12, 384, 16, 44])    |
| 82      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 8.7763357        | 0.1144830      | 0.2630005             | torch.Size([12, 384, 16, 44])    |
| 82      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.pwconv2                 | weight              | torch.float32 |         | -0.2827839        | 0.2840055        | 0.0004407      | 0.0046291             | torch.Size([192, 384, 1, 1])     |
| 82      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.pwconv2                 | bias                | torch.float32 |         | -0.3973245        | 0.2550720        | 0.0058488      | 0.0130268             | torch.Size([192])                |
| 82      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.1.pwconv2                 | output              | torch.float32 |         | -9.3369627        | 9.0224752        | 0.0082527      | 1.0041457             | torch.Size([12, 192, 16, 44])    |
| 83      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.1.layer_scale             | input               | torch.float32 |         | -9.3369627        | 9.0224752        | 0.0082527      | 1.0041457             | torch.Size([12, 192, 16, 44])    |
| 83      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.1.layer_scale             | weight              | torch.float32 |         | 0.7897609         | 1.0862169        | 0.9377795      | 0.0035024             | torch.Size([192])                |
| 83      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.1.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 83      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.1.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 83      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.1.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 83      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.1.layer_scale             | output              | torch.float32 |         | -8.9780626        | 9.4907064        | 0.0069473      | 0.9084018             | torch.Size([12, 192, 16, 44])    |
| 84      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.1.add                     | input_0             | torch.float32 |         | -10.3597164       | 10.3309383       | -0.0332360     | 1.9369709             | torch.Size([12, 192, 16, 44])    |
| 84      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.1.add                     | input_1             | torch.float32 |         | -8.9780626        | 9.4907064        | 0.0069473      | 0.9084018             | torch.Size([12, 192, 16, 44])    |
| 84      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.1.add                     | output              | torch.float32 |         | -11.8506708       | 14.0329704       | -0.0262887     | 3.0484395             | torch.Size([12, 192, 16, 44])    |
| 85      | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.1.extra_act               | input               | torch.float32 |         | -11.8506708       | 14.0329704       | -0.0262887     | 3.0484395             | torch.Size([12, 192, 16, 44])    |
| 85      | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.1.extra_act               | output              | torch.float32 |         | -11.8506708       | 14.0329704       | -0.0262887     | 3.0484395             | torch.Size([12, 192, 16, 44])    |
| 86      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.dwconv.0.0              | input               | torch.float32 |         | -11.8506708       | 14.0329704       | -0.0262887     | 3.0484395             | torch.Size([12, 192, 16, 44])    |
| 86      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.dwconv.0.0              | weight              | torch.float32 |         | -0.6128289        | 0.7119914        | 0.0182508      | 0.0690985             | torch.Size([192, 1, 1, 5])       |
| 86      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.dwconv.0.0              | bias                | torch.float32 |         | -0.5299160        | 0.5730036        | -0.0281834     | 0.0687647             | torch.Size([192])                |
| 86      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.dwconv.0.0              | output              | torch.float32 |         | -11.5260286       | 9.7798166        | -0.0133601     | 1.1771753             | torch.Size([12, 192, 16, 44])    |
| 87      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.2.dwconv.0.1              | input               | torch.float32 |         | -11.5260286       | 9.7798166        | -0.0133601     | 1.1771753             | torch.Size([12, 192, 16, 44])    |
| 87      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.2.dwconv.0.1              | weight              | torch.float32 |         | 0.6536013         | 1.1233611        | 0.9119569      | 0.0073225             | torch.Size([192])                |
| 87      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.2.dwconv.0.1              | bias                | torch.float32 |         | -0.5087568        | 0.4723978        | 0.0093277      | 0.0438197             | torch.Size([192])                |
| 87      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.2.dwconv.0.1              | running_mean        | torch.float32 |         | -1.4991188        | 1.0763890        | -0.0280515     | 0.2027582             | torch.Size([192])                |
| 87      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.2.dwconv.0.1              | running_var         | torch.float32 |         | 0.0597716         | 4.3358984        | 0.7720059      | 0.4512167             | torch.Size([192])                |
| 87      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.2.dwconv.0.1              | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 87      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.2.dwconv.0.1              | output              | torch.float32 |         | -6.9297895        | 8.3141947        | 0.0255161      | 1.0606500             | torch.Size([12, 192, 16, 44])    |
| 88      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.pwconv1.0               | input               | torch.float32 |         | -6.9297895        | 8.3141947        | 0.0255161      | 1.0606500             | torch.Size([12, 192, 16, 44])    |
| 88      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.pwconv1.0               | weight              | torch.float32 |         | -0.3219835        | 0.3560996        | -0.0006409     | 0.0051941             | torch.Size([384, 192, 1, 1])     |
| 88      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.pwconv1.0               | bias                | torch.float32 |         | -0.3400693        | 0.0379616        | -0.1582377     | 0.0047251             | torch.Size([384])                |
| 88      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.pwconv1.0               | output              | torch.float32 |         | -13.2342663       | 9.6094189        | -0.9831254     | 1.9452211             | torch.Size([12, 384, 16, 44])    |
| 89      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.2.pwconv1.1               | input               | torch.float32 |         | -13.2342663       | 9.6094189        | -0.9831254     | 1.9452211             | torch.Size([12, 384, 16, 44])    |
| 89      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.2.pwconv1.1               | output              | torch.float32 |         | -0.1699712        | 9.6094189        | 0.0781857      | 0.1954734             | torch.Size([12, 384, 16, 44])    |
| 90      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 9.6094189        | 0.0781857      | 0.1954734             | torch.Size([12, 384, 16, 44])    |
| 90      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.pwconv2                 | weight              | torch.float32 |         | -0.2902825        | 0.2879244        | 0.0005698      | 0.0045591             | torch.Size([192, 384, 1, 1])     |
| 90      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.pwconv2                 | bias                | torch.float32 |         | -0.2883149        | 0.2269242        | 0.0011922      | 0.0108517             | torch.Size([192])                |
| 90      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.2.pwconv2                 | output              | torch.float32 |         | -6.4952421        | 8.5988722        | 0.0311774      | 0.6982683             | torch.Size([12, 192, 16, 44])    |
| 91      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.2.layer_scale             | input               | torch.float32 |         | -6.4952421        | 8.5988722        | 0.0311774      | 0.6982683             | torch.Size([12, 192, 16, 44])    |
| 91      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.2.layer_scale             | weight              | torch.float32 |         | 0.7919329         | 1.1133120        | 0.9540395      | 0.0033504             | torch.Size([192])                |
| 91      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.2.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 91      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.2.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 91      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.2.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 91      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.2.layer_scale             | output              | torch.float32 |         | -6.3596740        | 8.6757402        | 0.0325259      | 0.6481408             | torch.Size([12, 192, 16, 44])    |
| 92      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.2.add                     | input_0             | torch.float32 |         | -11.8506708       | 14.0329704       | -0.0262887     | 3.0484395             | torch.Size([12, 192, 16, 44])    |
| 92      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.2.add                     | input_1             | torch.float32 |         | -6.3596740        | 8.6757402        | 0.0325259      | 0.6481408             | torch.Size([12, 192, 16, 44])    |
| 92      | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.2.add                     | output              | torch.float32 |         | -15.4012299       | 15.6172733       | 0.0062373      | 4.0635004             | torch.Size([12, 192, 16, 44])    |
| 93      | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.2.extra_act               | input               | torch.float32 |         | -15.4012299       | 15.6172733       | 0.0062373      | 4.0635004             | torch.Size([12, 192, 16, 44])    |
| 93      | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.2.extra_act               | output              | torch.float32 |         | -15.4012299       | 15.6172733       | 0.0062373      | 4.0635004             | torch.Size([12, 192, 16, 44])    |
| 94      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.dwconv.0.0              | input               | torch.float32 |         | -15.4012299       | 15.6172733       | 0.0062373      | 4.0635004             | torch.Size([12, 192, 16, 44])    |
| 94      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.dwconv.0.0              | weight              | torch.float32 |         | -0.5962015        | 0.5861145        | 0.0146010      | 0.0673873             | torch.Size([192, 1, 5, 1])       |
| 94      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.dwconv.0.0              | bias                | torch.float32 |         | -0.4784470        | 0.5059028        | 0.0100960      | 0.0671344             | torch.Size([192])                |
| 94      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.dwconv.0.0              | output              | torch.float32 |         | -8.3439550        | 11.6843023       | 0.0210928      | 1.3656862             | torch.Size([12, 192, 16, 44])    |
| 95      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.3.dwconv.0.1              | input               | torch.float32 |         | -8.3439550        | 11.6843023       | 0.0210928      | 1.3656862             | torch.Size([12, 192, 16, 44])    |
| 95      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.3.dwconv.0.1              | weight              | torch.float32 |         | 0.7945942         | 1.1249197        | 0.9710380      | 0.0043983             | torch.Size([192])                |
| 95      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.3.dwconv.0.1              | bias                | torch.float32 |         | -0.6802742        | 0.5298053        | -0.0046460     | 0.0506481             | torch.Size([192])                |
| 95      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.3.dwconv.0.1              | running_mean        | torch.float32 |         | -2.0825279        | 1.5354877        | 0.0178287      | 0.2934828             | torch.Size([192])                |
| 95      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.3.dwconv.0.1              | running_var         | torch.float32 |         | 0.0549089         | 5.3001704        | 0.8684719      | 0.3879853             | torch.Size([192])                |
| 95      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.3.dwconv.0.1              | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 95      | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.3.dwconv.0.1              | output              | torch.float32 |         | -9.8949661        | 7.5539999        | -0.0034940     | 1.1709723             | torch.Size([12, 192, 16, 44])    |
| 96      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.pwconv1.0               | input               | torch.float32 |         | -9.8949661        | 7.5539999        | -0.0034940     | 1.1709723             | torch.Size([12, 192, 16, 44])    |
| 96      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.pwconv1.0               | weight              | torch.float32 |         | -0.3382373        | 0.3421775        | 0.0001951      | 0.0059827             | torch.Size([384, 192, 1, 1])     |
| 96      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.pwconv1.0               | bias                | torch.float32 |         | -0.3560182        | 0.0539114        | -0.1509591     | 0.0047631             | torch.Size([384])                |
| 96      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.pwconv1.0               | output              | torch.float32 |         | -12.3003693       | 8.1753111        | -1.0185406     | 2.3797545             | torch.Size([12, 384, 16, 44])    |
| 97      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.3.pwconv1.1               | input               | torch.float32 |         | -12.3003693       | 8.1753111        | -1.0185406     | 2.3797545             | torch.Size([12, 384, 16, 44])    |
| 97      | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.3.pwconv1.1               | output              | torch.float32 |         | -0.1699712        | 8.1753111        | 0.1231383      | 0.2718205             | torch.Size([12, 384, 16, 44])    |
| 98      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 8.1753111        | 0.1231383      | 0.2718205             | torch.Size([12, 384, 16, 44])    |
| 98      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.pwconv2                 | weight              | torch.float32 |         | -0.3167260        | 0.3085986        | 0.0000606      | 0.0053922             | torch.Size([192, 384, 1, 1])     |
| 98      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.pwconv2                 | bias                | torch.float32 |         | -0.2974765        | 0.2507665        | 0.0169243      | 0.0112521             | torch.Size([192])                |
| 98      | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.3.pwconv2                 | output              | torch.float32 |         | -8.0404940        | 9.4797039        | 0.0188270      | 1.0987995             | torch.Size([12, 192, 16, 44])    |
| 99      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.3.layer_scale             | input               | torch.float32 |         | -8.0404940        | 9.4797039        | 0.0188270      | 1.0987995             | torch.Size([12, 192, 16, 44])    |
| 99      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.3.layer_scale             | weight              | torch.float32 |         | 0.8339068         | 1.1734207        | 1.0387969      | 0.0034238             | torch.Size([192])                |
| 99      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.3.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 99      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.3.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 99      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.3.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 99      | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.3.layer_scale             | output              | torch.float32 |         | -8.7927933        | 10.7313452       | 0.0170129      | 1.2019128             | torch.Size([12, 192, 16, 44])    |
| 100     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.3.add                     | input_0             | torch.float32 |         | -15.4012299       | 15.6172733       | 0.0062373      | 4.0635004             | torch.Size([12, 192, 16, 44])    |
| 100     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.3.add                     | input_1             | torch.float32 |         | -8.7927933        | 10.7313452       | 0.0170129      | 1.2019128             | torch.Size([12, 192, 16, 44])    |
| 100     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.3.add                     | output              | torch.float32 |         | -15.2083511       | 17.7171707       | 0.0232502      | 5.8812571             | torch.Size([12, 192, 16, 44])    |
| 101     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.3.extra_act               | input               | torch.float32 |         | -15.2083511       | 17.7171707       | 0.0232502      | 5.8812571             | torch.Size([12, 192, 16, 44])    |
| 101     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.3.extra_act               | output              | torch.float32 |         | -15.2083511       | 17.7171707       | 0.0232502      | 5.8812571             | torch.Size([12, 192, 16, 44])    |
| 102     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.dwconv.0.0              | input               | torch.float32 |         | -15.2083511       | 17.7171707       | 0.0232502      | 5.8812571             | torch.Size([12, 192, 16, 44])    |
| 102     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.dwconv.0.0              | weight              | torch.float32 |         | -0.5330904        | 0.6114874        | 0.0043412      | 0.0650963             | torch.Size([192, 1, 1, 5])       |
| 102     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.dwconv.0.0              | bias                | torch.float32 |         | -0.5420189        | 0.4745232        | -0.0159653     | 0.0677277             | torch.Size([192])                |
| 102     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.dwconv.0.0              | output              | torch.float32 |         | -10.7636347       | 14.0062170       | 0.0359445      | 1.4429196             | torch.Size([12, 192, 16, 44])    |
| 103     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.4.dwconv.0.1              | input               | torch.float32 |         | -10.7636347       | 14.0062170       | 0.0359445      | 1.4429196             | torch.Size([12, 192, 16, 44])    |
| 103     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.4.dwconv.0.1              | weight              | torch.float32 |         | 0.7304041         | 1.1407574        | 0.9398459      | 0.0056892             | torch.Size([192])                |
| 103     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.4.dwconv.0.1              | bias                | torch.float32 |         | -0.6024958        | 0.5697968        | 0.0308373      | 0.0437604             | torch.Size([192])                |
| 103     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.4.dwconv.0.1              | running_mean        | torch.float32 |         | -1.7761201        | 2.0776517        | 0.0421978      | 0.3298974             | torch.Size([192])                |
| 103     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.4.dwconv.0.1              | running_var         | torch.float32 |         | 0.0482749         | 7.0475302        | 0.9203412      | 0.7549338             | torch.Size([192])                |
| 103     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.4.dwconv.0.1              | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 103     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.4.dwconv.0.1              | output              | torch.float32 |         | -7.4886360        | 9.0110836        | 0.0280019      | 1.0895543             | torch.Size([12, 192, 16, 44])    |
| 104     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.pwconv1.0               | input               | torch.float32 |         | -7.4886360        | 9.0110836        | 0.0280019      | 1.0895543             | torch.Size([12, 192, 16, 44])    |
| 104     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.pwconv1.0               | weight              | torch.float32 |         | -0.3992376        | 0.3128951        | -0.0014283     | 0.0053068             | torch.Size([384, 192, 1, 1])     |
| 104     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.pwconv1.0               | bias                | torch.float32 |         | -0.4347230        | 0.0640009        | -0.1505269     | 0.0066143             | torch.Size([384])                |
| 104     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.pwconv1.0               | output              | torch.float32 |         | -14.0232773       | 9.6843910        | -0.8602848     | 1.6743836             | torch.Size([12, 384, 16, 44])    |
| 105     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.4.pwconv1.1               | input               | torch.float32 |         | -14.0232773       | 9.6843910        | -0.8602848     | 1.6743836             | torch.Size([12, 384, 16, 44])    |
| 105     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.4.pwconv1.1               | output              | torch.float32 |         | -0.1699712        | 9.6843910        | 0.0776744      | 0.1963410             | torch.Size([12, 384, 16, 44])    |
| 106     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 9.6843910        | 0.0776744      | 0.1963410             | torch.Size([12, 384, 16, 44])    |
| 106     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.pwconv2                 | weight              | torch.float32 |         | -0.3134340        | 0.2695324        | 0.0005416      | 0.0045821             | torch.Size([192, 384, 1, 1])     |
| 106     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.pwconv2                 | bias                | torch.float32 |         | -0.2612157        | 0.2155838        | 0.0139778      | 0.0088484             | torch.Size([192])                |
| 106     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.4.pwconv2                 | output              | torch.float32 |         | -8.4154520        | 11.3440275       | 0.0307775      | 0.6790407             | torch.Size([12, 192, 16, 44])    |
| 107     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.4.layer_scale             | input               | torch.float32 |         | -8.4154520        | 11.3440275       | 0.0307775      | 0.6790407             | torch.Size([12, 192, 16, 44])    |
| 107     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.4.layer_scale             | weight              | torch.float32 |         | 0.7801780         | 1.2545878        | 0.9743940      | 0.0054589             | torch.Size([192])                |
| 107     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.4.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 107     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.4.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 107     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.4.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 107     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.4.layer_scale             | output              | torch.float32 |         | -9.8647299        | 11.4589539       | 0.0332966      | 0.6688427             | torch.Size([12, 192, 16, 44])    |
| 108     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.4.add                     | input_0             | torch.float32 |         | -15.2083511       | 17.7171707       | 0.0232502      | 5.8812571             | torch.Size([12, 192, 16, 44])    |
| 108     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.4.add                     | input_1             | torch.float32 |         | -9.8647299        | 11.4589539       | 0.0332966      | 0.6688427             | torch.Size([12, 192, 16, 44])    |
| 108     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.4.add                     | output              | torch.float32 |         | -17.7341824       | 18.6704311       | 0.0565469      | 6.8229456             | torch.Size([12, 192, 16, 44])    |
| 109     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.4.extra_act               | input               | torch.float32 |         | -17.7341824       | 18.6704311       | 0.0565469      | 6.8229456             | torch.Size([12, 192, 16, 44])    |
| 109     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.4.extra_act               | output              | torch.float32 |         | -17.7341824       | 18.6704311       | 0.0565469      | 6.8229456             | torch.Size([12, 192, 16, 44])    |
| 110     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.dwconv.0.0              | input               | torch.float32 |         | -17.7341824       | 18.6704311       | 0.0565469      | 6.8229456             | torch.Size([12, 192, 16, 44])    |
| 110     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.dwconv.0.0              | weight              | torch.float32 |         | -0.5402265        | 0.5530636        | 0.0142299      | 0.0708573             | torch.Size([192, 1, 5, 1])       |
| 110     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.dwconv.0.0              | bias                | torch.float32 |         | -0.5847846        | 0.5125430        | -0.0211482     | 0.0682339             | torch.Size([192])                |
| 110     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.dwconv.0.0              | output              | torch.float32 |         | -16.0433750       | 12.9703875       | -0.0802976     | 2.4918683             | torch.Size([12, 192, 16, 44])    |
| 111     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.5.dwconv.0.1              | input               | torch.float32 |         | -16.0433750       | 12.9703875       | -0.0802976     | 2.4918683             | torch.Size([12, 192, 16, 44])    |
| 111     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.5.dwconv.0.1              | weight              | torch.float32 |         | 0.8236712         | 1.1603099        | 1.0094694      | 0.0046198             | torch.Size([192])                |
| 111     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.5.dwconv.0.1              | bias                | torch.float32 |         | -0.6199875        | 0.5646603        | -0.0141269     | 0.0539969             | torch.Size([192])                |
| 111     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.5.dwconv.0.1              | running_mean        | torch.float32 |         | -3.1700313        | 2.8130803        | -0.0745013     | 0.6874861             | torch.Size([192])                |
| 111     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.5.dwconv.0.1              | running_var         | torch.float32 |         | 0.2607865         | 7.2245145        | 1.4898851      | 0.9759664             | torch.Size([192])                |
| 111     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.5.dwconv.0.1              | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 111     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.5.dwconv.0.1              | output              | torch.float32 |         | -9.1854744        | 8.9381323        | -0.0163969     | 1.2424544             | torch.Size([12, 192, 16, 44])    |
| 112     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.pwconv1.0               | input               | torch.float32 |         | -9.1854744        | 8.9381323        | -0.0163969     | 1.2424544             | torch.Size([12, 192, 16, 44])    |
| 112     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.pwconv1.0               | weight              | torch.float32 |         | -0.3696525        | 0.3289170        | 0.0014847      | 0.0064055             | torch.Size([384, 192, 1, 1])     |
| 112     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.pwconv1.0               | bias                | torch.float32 |         | -0.4440894        | 0.0760342        | -0.1446273     | 0.0064442             | torch.Size([384])                |
| 112     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.pwconv1.0               | output              | torch.float32 |         | -13.6349430       | 11.2012224       | -1.0644051     | 2.5562632             | torch.Size([12, 384, 16, 44])    |
| 113     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.5.pwconv1.1               | input               | torch.float32 |         | -13.6349430       | 11.2012224       | -1.0644051     | 2.5562632             | torch.Size([12, 384, 16, 44])    |
| 113     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.5.pwconv1.1               | output              | torch.float32 |         | -0.1699712        | 11.2012224       | 0.1334291      | 0.3019432             | torch.Size([12, 384, 16, 44])    |
| 114     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 11.2012224       | 0.1334291      | 0.3019432             | torch.Size([12, 384, 16, 44])    |
| 114     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.pwconv2                 | weight              | torch.float32 |         | -0.3081445        | 0.3557028        | 0.0014857      | 0.0059612             | torch.Size([192, 384, 1, 1])     |
| 114     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.pwconv2                 | bias                | torch.float32 |         | -0.2053056        | 0.4145362        | 0.0174498      | 0.0091007             | torch.Size([192])                |
| 114     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.5.pwconv2                 | output              | torch.float32 |         | -13.1815681       | 11.6183100       | 0.1142727      | 1.4585367             | torch.Size([12, 192, 16, 44])    |
| 115     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.5.layer_scale             | input               | torch.float32 |         | -13.1815681       | 11.6183100       | 0.1142727      | 1.4585367             | torch.Size([12, 192, 16, 44])    |
| 115     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.5.layer_scale             | weight              | torch.float32 |         | 0.8864720         | 1.3307509        | 1.1050286      | 0.0052255             | torch.Size([192])                |
| 115     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.5.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 115     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.5.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 115     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.5.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 115     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.5.layer_scale             | output              | torch.float32 |         | -14.1320868       | 13.1794119       | 0.1267129      | 1.8036407             | torch.Size([12, 192, 16, 44])    |
| 116     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.5.add                     | input_0             | torch.float32 |         | -17.7341824       | 18.6704311       | 0.0565469      | 6.8229456             | torch.Size([12, 192, 16, 44])    |
| 116     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.5.add                     | input_1             | torch.float32 |         | -14.1320868       | 13.1794119       | 0.1267129      | 1.8036407             | torch.Size([12, 192, 16, 44])    |
| 116     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.5.add                     | output              | torch.float32 |         | -18.1537857       | 22.3876057       | 0.1832598      | 9.5190907             | torch.Size([12, 192, 16, 44])    |
| 117     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.5.extra_act               | input               | torch.float32 |         | -18.1537857       | 22.3876057       | 0.1832598      | 9.5190907             | torch.Size([12, 192, 16, 44])    |
| 117     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.5.extra_act               | output              | torch.float32 |         | -18.1537857       | 22.3876057       | 0.1832598      | 9.5190907             | torch.Size([12, 192, 16, 44])    |
| 118     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.dwconv.0.0              | input               | torch.float32 |         | -18.1537857       | 22.3876057       | 0.1832598      | 9.5190907             | torch.Size([12, 192, 16, 44])    |
| 118     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.dwconv.0.0              | weight              | torch.float32 |         | -0.5975668        | 0.5950474        | -0.0048354     | 0.0718016             | torch.Size([192, 1, 1, 5])       |
| 118     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.dwconv.0.0              | bias                | torch.float32 |         | -0.5189757        | 0.5069410        | 0.0116194      | 0.0754684             | torch.Size([192])                |
| 118     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.dwconv.0.0              | output              | torch.float32 |         | -20.8546352       | 19.3089294       | 0.1749154      | 3.4008048             | torch.Size([12, 192, 16, 44])    |
| 119     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.6.dwconv.0.1              | input               | torch.float32 |         | -20.8546352       | 19.3089294       | 0.1749154      | 3.4008048             | torch.Size([12, 192, 16, 44])    |
| 119     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.6.dwconv.0.1              | weight              | torch.float32 |         | 0.7038018         | 1.2067878        | 0.9610404      | 0.0080899             | torch.Size([192])                |
| 119     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.6.dwconv.0.1              | bias                | torch.float32 |         | -0.5408224        | 0.5371132        | 0.0113589      | 0.0354455             | torch.Size([192])                |
| 119     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.6.dwconv.0.1              | running_mean        | torch.float32 |         | -3.6576855        | 4.1293530        | 0.1521503      | 0.9975373             | torch.Size([192])                |
| 119     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.6.dwconv.0.1              | running_var         | torch.float32 |         | 0.1459446         | 9.2413130        | 2.0314293      | 2.7592115             | torch.Size([192])                |
| 119     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.6.dwconv.0.1              | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 119     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.6.dwconv.0.1              | output              | torch.float32 |         | -9.5836935        | 9.5821686        | 0.0277743      | 1.1092037             | torch.Size([12, 192, 16, 44])    |
| 120     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.pwconv1.0               | input               | torch.float32 |         | -9.5836935        | 9.5821686        | 0.0277743      | 1.1092037             | torch.Size([12, 192, 16, 44])    |
| 120     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.pwconv1.0               | weight              | torch.float32 |         | -0.3172777        | 0.4361245        | 0.0001557      | 0.0056055             | torch.Size([384, 192, 1, 1])     |
| 120     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.pwconv1.0               | bias                | torch.float32 |         | -0.4558428        | 0.1022655        | -0.1479584     | 0.0081169             | torch.Size([384])                |
| 120     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.pwconv1.0               | output              | torch.float32 |         | -13.8179808       | 10.3887157       | -0.7830464     | 1.9622719             | torch.Size([12, 384, 16, 44])    |
| 121     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.6.pwconv1.1               | input               | torch.float32 |         | -13.8179808       | 10.3887157       | -0.7830464     | 1.9622719             | torch.Size([12, 384, 16, 44])    |
| 121     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.6.pwconv1.1               | output              | torch.float32 |         | -0.1699712        | 10.3887157       | 0.1293209      | 0.2779395             | torch.Size([12, 384, 16, 44])    |
| 122     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 10.3887157       | 0.1293209      | 0.2779395             | torch.Size([12, 384, 16, 44])    |
| 122     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.pwconv2                 | weight              | torch.float32 |         | -0.2930687        | 0.2915065        | 0.0004026      | 0.0049942             | torch.Size([192, 384, 1, 1])     |
| 122     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.pwconv2                 | bias                | torch.float32 |         | -0.3512236        | 0.3008186        | 0.0124631      | 0.0088110             | torch.Size([192])                |
| 122     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.6.pwconv2                 | output              | torch.float32 |         | -12.1880713       | 13.7640905       | 0.0190472      | 1.3274240             | torch.Size([12, 192, 16, 44])    |
| 123     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.6.layer_scale             | input               | torch.float32 |         | -12.1880713       | 13.7640905       | 0.0190472      | 1.3274240             | torch.Size([12, 192, 16, 44])    |
| 123     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.6.layer_scale             | weight              | torch.float32 |         | 0.8399817         | 1.1830213        | 1.0177159      | 0.0038775             | torch.Size([192])                |
| 123     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.6.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 123     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.6.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 123     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.6.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 123     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.6.layer_scale             | output              | torch.float32 |         | -12.6569405       | 15.4861441       | 0.0220145      | 1.4055257             | torch.Size([12, 192, 16, 44])    |
| 124     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.6.add                     | input_0             | torch.float32 |         | -18.1537857       | 22.3876057       | 0.1832598      | 9.5190907             | torch.Size([12, 192, 16, 44])    |
| 124     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.6.add                     | input_1             | torch.float32 |         | -12.6569405       | 15.4861441       | 0.0220145      | 1.4055257             | torch.Size([12, 192, 16, 44])    |
| 124     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.6.add                     | output              | torch.float32 |         | -21.6081238       | 25.8851891       | 0.2052742      | 11.2126093            | torch.Size([12, 192, 16, 44])    |
| 125     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.6.extra_act               | input               | torch.float32 |         | -21.6081238       | 25.8851891       | 0.2052742      | 11.2126093            | torch.Size([12, 192, 16, 44])    |
| 125     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.6.extra_act               | output              | torch.float32 |         | -21.6081238       | 25.8851891       | 0.2052742      | 11.2126093            | torch.Size([12, 192, 16, 44])    |
| 126     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.dwconv.0.0              | input               | torch.float32 |         | -21.6081238       | 25.8851891       | 0.2052742      | 11.2126093            | torch.Size([12, 192, 16, 44])    |
| 126     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.dwconv.0.0              | weight              | torch.float32 |         | -0.5935198        | 0.5644848        | 0.0008043      | 0.0701117             | torch.Size([192, 1, 5, 1])       |
| 126     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.dwconv.0.0              | bias                | torch.float32 |         | -0.6084507        | 0.5213525        | -0.0022144     | 0.0689186             | torch.Size([192])                |
| 126     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.dwconv.0.0              | output              | torch.float32 |         | -18.9869595       | 13.9503441       | 0.0368284      | 3.4201739             | torch.Size([12, 192, 16, 44])    |
| 127     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.7.dwconv.0.1              | input               | torch.float32 |         | -18.9869595       | 13.9503441       | 0.0368284      | 3.4201739             | torch.Size([12, 192, 16, 44])    |
| 127     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.7.dwconv.0.1              | weight              | torch.float32 |         | 0.7567621         | 1.1853398        | 1.0145739      | 0.0058836             | torch.Size([192])                |
| 127     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.7.dwconv.0.1              | bias                | torch.float32 |         | -0.6009705        | 0.6247410        | 0.0049401      | 0.0484053             | torch.Size([192])                |
| 127     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.7.dwconv.0.1              | running_mean        | torch.float32 |         | -3.0080905        | 2.7594349        | 0.0286695      | 0.8560447             | torch.Size([192])                |
| 127     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.7.dwconv.0.1              | running_var         | torch.float32 |         | 0.3458993         | 10.2207146       | 2.3147607      | 1.9694462             | torch.Size([192])                |
| 127     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.7.dwconv.0.1              | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 127     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.2.block.7.dwconv.0.1              | output              | torch.float32 |         | -10.7470798       | 9.0256386        | 0.0083494      | 1.1785823             | torch.Size([12, 192, 16, 44])    |
| 128     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.pwconv1.0               | input               | torch.float32 |         | -10.7470798       | 9.0256386        | 0.0083494      | 1.1785823             | torch.Size([12, 192, 16, 44])    |
| 128     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.pwconv1.0               | weight              | torch.float32 |         | -0.3543261        | 0.3535915        | 0.0000997      | 0.0064399             | torch.Size([384, 192, 1, 1])     |
| 128     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.pwconv1.0               | bias                | torch.float32 |         | -0.3648984        | 0.0541026        | -0.1423695     | 0.0057704             | torch.Size([384])                |
| 128     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.pwconv1.0               | output              | torch.float32 |         | -13.3541059       | 12.8048468       | -1.0168190     | 2.4211800             | torch.Size([12, 384, 16, 44])    |
| 129     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.7.pwconv1.1               | input               | torch.float32 |         | -13.3541059       | 12.8048468       | -1.0168190     | 2.4211800             | torch.Size([12, 384, 16, 44])    |
| 129     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.2.block.7.pwconv1.1               | output              | torch.float32 |         | -0.1699712        | 12.8048468       | 0.1275116      | 0.3268560             | torch.Size([12, 384, 16, 44])    |
| 130     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 12.8048468       | 0.1275116      | 0.3268560             | torch.Size([12, 384, 16, 44])    |
| 130     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.pwconv2                 | weight              | torch.float32 |         | -0.3659763        | 0.3288255        | -0.0001935     | 0.0062431             | torch.Size([192, 384, 1, 1])     |
| 130     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.pwconv2                 | bias                | torch.float32 |         | -0.1630916        | 0.1800659        | 0.0032293      | 0.0038957             | torch.Size([192])                |
| 130     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.2.block.7.pwconv2                 | output              | torch.float32 |         | -21.0859909       | 21.9957619       | -0.0282085     | 3.7920682             | torch.Size([12, 192, 16, 44])    |
| 131     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.7.layer_scale             | input               | torch.float32 |         | -21.0859909       | 21.9957619       | -0.0282085     | 3.7920682             | torch.Size([12, 192, 16, 44])    |
| 131     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.7.layer_scale             | weight              | torch.float32 |         | 0.9109059         | 1.3151405        | 1.1216917      | 0.0040952             | torch.Size([192])                |
| 131     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.7.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 131     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.7.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 131     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.7.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 131     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.2.block.7.layer_scale             | output              | torch.float32 |         | -23.9092674       | 22.9788857       | -0.0326577     | 4.7884741             | torch.Size([12, 192, 16, 44])    |
| 132     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.7.add                     | input_0             | torch.float32 |         | -21.6081238       | 25.8851891       | 0.2052742      | 11.2126093            | torch.Size([12, 192, 16, 44])    |
| 132     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.7.add                     | input_1             | torch.float32 |         | -23.9092674       | 22.9788857       | -0.0326577     | 4.7884741             | torch.Size([12, 192, 16, 44])    |
| 132     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.2.block.7.add                     | output              | torch.float32 |         | -33.6699448       | 33.4728813       | 0.1726165      | 18.4585190            | torch.Size([12, 192, 16, 44])    |
| 133     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.7.extra_act               | input               | torch.float32 |         | -33.6699448       | 33.4728813       | 0.1726165      | 18.4585190            | torch.Size([12, 192, 16, 44])    |
| 133     | torch.nn.modules.linear.Identity                                                  | backbone.stages.2.block.7.extra_act               | output              | torch.float32 |         | -33.6699448       | 33.4728813       | 0.1726165      | 18.4585190            | torch.Size([12, 192, 16, 44])    |
| 134     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.2                             | input               | torch.float32 |         | -33.6699448       | 33.4728813       | 0.1726165      | 18.4585190            | torch.Size([12, 192, 16, 44])    |
| 134     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.2                             | weight              | torch.float32 |         | 0.4536798         | 0.8310655        | 0.6572363      | 0.0057898             | torch.Size([192])                |
| 134     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.2                             | bias                | torch.float32 |         | -0.1366851        | 0.1372305        | 0.0004346      | 0.0031207             | torch.Size([192])                |
| 134     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.2                             | running_mean        | torch.float32 |         | -5.9906597        | 4.8830447        | 0.2049369      | 3.5092514             | torch.Size([192])                |
| 134     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.2                             | running_var         | torch.float32 |         | 6.6480818         | 34.8125572       | 19.6136646     | 31.4906311            | torch.Size([192])                |
| 134     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.2                             | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 134     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.2                             | output              | torch.float32 |         | -7.4979534        | 8.5700006        | -0.0048172     | 0.3464860             | torch.Size([12, 192, 16, 44])    |
| 135     | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.2.proj.0                | input               | torch.float32 |         | -33.6699448       | 33.4728813       | 0.1726165      | 18.4585190            | torch.Size([12, 192, 16, 44])    |
| 135     | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.2.proj.0                | weight              | torch.float32 |         | -0.3581207        | 0.3301987        | 0.0000597      | 0.0040129             | torch.Size([384, 192, 2, 2])     |
| 135     | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.2.proj.0                | bias                | torch.float32 |         | -0.1371177        | 0.1387143        | 0.0012109      | 0.0021213             | torch.Size([384])                |
| 135     | torch.nn.modules.conv.Conv2d                                                      | backbone.downsample_block.2.proj.0                | output              | torch.float32 |         | -228.6954193      | 257.0709534      | 0.7765009      | 467.4173584           | torch.Size([12, 384, 8, 22])     |
| 136     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.2.proj.1                | input               | torch.float32 |         | -228.6954193      | 257.0709534      | 0.7765009      | 467.4173584           | torch.Size([12, 384, 8, 22])     |
| 136     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.2.proj.1                | weight              | torch.float32 |         | 0.7551901         | 1.3569726        | 1.0814337      | 0.0062585             | torch.Size([384])                |
| 136     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.2.proj.1                | bias                | torch.float32 |         | -0.5738057        | 0.4509688        | -0.0058685     | 0.0183539             | torch.Size([384])                |
| 136     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.2.proj.1                | running_mean        | torch.float32 |         | -36.6323395       | 39.9327927       | 0.8269669      | 141.1866608           | torch.Size([384])                |
| 136     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.2.proj.1                | running_var         | torch.float32 |         | 108.3596344       | 4295.3315430     | 392.6242065    | 359435.0625000        | torch.Size([384])                |
| 136     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.2.proj.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 136     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.downsample_block.2.proj.1                | output              | torch.float32 |         | -9.9081717        | 11.4464636       | -0.0088131     | 1.2565665             | torch.Size([12, 384, 8, 22])     |
| 137     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.dwconv.0                | input               | torch.float32 |         | -9.9081717        | 11.4464636       | -0.0088131     | 1.2565665             | torch.Size([12, 384, 8, 22])     |
| 137     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.dwconv.0                | weight              | torch.float32 |         | -0.4570907        | 0.4408599        | -0.0025073     | 0.0404489             | torch.Size([384, 1, 3, 3])       |
| 137     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.dwconv.0                | bias                | torch.float32 |         | -0.3661686        | 0.4089541        | 0.0015946      | 0.0379449             | torch.Size([384])                |
| 137     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.dwconv.0                | output              | torch.float32 |         | -6.7926812        | 6.1117482        | 0.0029854      | 0.3967720             | torch.Size([12, 384, 8, 22])     |
| 138     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.0.dwconv.1                | input               | torch.float32 |         | -6.7926812        | 6.1117482        | 0.0029854      | 0.3967720             | torch.Size([12, 384, 8, 22])     |
| 138     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.0.dwconv.1                | weight              | torch.float32 |         | 0.5315000         | 1.1108470        | 0.8423946      | 0.0080709             | torch.Size([384])                |
| 138     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.0.dwconv.1                | bias                | torch.float32 |         | -0.8067595        | 0.8567720        | 0.0033822      | 0.0702048             | torch.Size([384])                |
| 138     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.0.dwconv.1                | running_mean        | torch.float32 |         | -0.5802938        | 0.5430273        | 0.0040959      | 0.0451177             | torch.Size([384])                |
| 138     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.0.dwconv.1                | running_var         | torch.float32 |         | 0.0138209         | 2.1549993        | 0.3200104      | 0.0568956             | torch.Size([384])                |
| 138     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.0.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 138     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.0.dwconv.1                | output              | torch.float32 |         | -8.7443466        | 9.3466692        | 0.0041502      | 0.8545694             | torch.Size([12, 384, 8, 22])     |
| 139     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.pwconv1                 | input               | torch.float32 |         | -8.7443466        | 9.3466692        | 0.0041502      | 0.8545694             | torch.Size([12, 384, 8, 22])     |
| 139     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.pwconv1                 | weight              | torch.float32 |         | -0.3454532        | 0.4094583        | -0.0002953     | 0.0041955             | torch.Size([1152, 384, 1, 1])    |
| 139     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.pwconv1                 | bias                | torch.float32 |         | -0.3558742        | 0.1381556        | -0.1628864     | 0.0041367             | torch.Size([1152])               |
| 139     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.pwconv1                 | output              | torch.float32 |         | -14.3476934       | 12.9093990       | -1.8663913     | 4.1024346             | torch.Size([12, 1152, 8, 22])    |
| 140     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.0.act                     | input               | torch.float32 |         | -14.3476934       | 12.9093990       | -1.8663913     | 4.1024346             | torch.Size([12, 1152, 8, 22])    |
| 140     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.0.act                     | output              | torch.float32 |         | -0.1699712        | 12.9093990       | 0.1139065      | 0.3201028             | torch.Size([12, 1152, 8, 22])    |
| 141     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 12.9093990       | 0.1139065      | 0.3201028             | torch.Size([12, 1152, 8, 22])    |
| 141     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.pwconv2                 | weight              | torch.float32 |         | -0.3008121        | 0.2976886        | 0.0002213      | 0.0037949             | torch.Size([384, 1152, 1, 1])    |
| 141     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.pwconv2                 | bias                | torch.float32 |         | -0.3131243        | 0.2822200        | -0.0056069     | 0.0081538             | torch.Size([384])                |
| 141     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.0.pwconv2                 | output              | torch.float32 |         | -22.6099358       | 21.8921757       | 0.0308203      | 5.2966375             | torch.Size([12, 384, 8, 22])     |
| 142     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.0.layer_scale             | input               | torch.float32 |         | -22.6099358       | 21.8921757       | 0.0308203      | 5.2966375             | torch.Size([12, 384, 8, 22])     |
| 142     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.0.layer_scale             | weight              | torch.float32 |         | 0.6905497         | 1.0868244        | 0.8659732      | 0.0045470             | torch.Size([384])                |
| 142     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.0.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 142     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.0.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 142     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.0.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 142     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.0.layer_scale             | output              | torch.float32 |         | -21.6330452       | 19.7475033       | 0.0287950      | 4.1279097             | torch.Size([12, 384, 8, 22])     |
| 143     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.0.add                     | input_0             | torch.float32 |         | -9.9081717        | 11.4464636       | -0.0088131     | 1.2565665             | torch.Size([12, 384, 8, 22])     |
| 143     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.0.add                     | input_1             | torch.float32 |         | -21.6330452       | 19.7475033       | 0.0287950      | 4.1279097             | torch.Size([12, 384, 8, 22])     |
| 143     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.0.add                     | output              | torch.float32 |         | -22.0743313       | 21.0118847       | 0.0199819      | 6.0956726             | torch.Size([12, 384, 8, 22])     |
| 144     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.0.extra_act               | input               | torch.float32 |         | -22.0743313       | 21.0118847       | 0.0199819      | 6.0956726             | torch.Size([12, 384, 8, 22])     |
| 144     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.0.extra_act               | output              | torch.float32 |         | -22.0743313       | 21.0118847       | 0.0199819      | 6.0956726             | torch.Size([12, 384, 8, 22])     |
| 145     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.dwconv.0                | input               | torch.float32 |         | -22.0743313       | 21.0118847       | 0.0199819      | 6.0956726             | torch.Size([12, 384, 8, 22])     |
| 145     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.dwconv.0                | weight              | torch.float32 |         | -0.5332693        | 0.5523274        | 0.0015292      | 0.0392474             | torch.Size([384, 1, 3, 3])       |
| 145     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.dwconv.0                | bias                | torch.float32 |         | -0.3770306        | 0.4064817        | -0.0046409     | 0.0387838             | torch.Size([384])                |
| 145     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.dwconv.0                | output              | torch.float32 |         | -14.6579456       | 19.5053635       | -0.0384865     | 2.0543215             | torch.Size([12, 384, 8, 22])     |
| 146     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.1.dwconv.1                | input               | torch.float32 |         | -14.6579456       | 19.5053635       | -0.0384865     | 2.0543215             | torch.Size([12, 384, 8, 22])     |
| 146     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.1.dwconv.1                | weight              | torch.float32 |         | 0.6647795         | 1.0914171        | 0.8920445      | 0.0065015             | torch.Size([384])                |
| 146     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.1.dwconv.1                | bias                | torch.float32 |         | -0.8801027        | 0.6669436        | -0.0148397     | 0.0694731             | torch.Size([384])                |
| 146     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.1.dwconv.1                | running_mean        | torch.float32 |         | -2.5848281        | 2.9301550        | -0.0296885     | 0.3714691             | torch.Size([384])                |
| 146     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.1.dwconv.1                | running_var         | torch.float32 |         | 0.2266718         | 9.7231741        | 1.4361131      | 1.3633128             | torch.Size([384])                |
| 146     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.1.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 146     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.1.dwconv.1                | output              | torch.float32 |         | -9.3039246        | 7.6660447        | -0.0201307     | 1.0004158             | torch.Size([12, 384, 8, 22])     |
| 147     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.pwconv1                 | input               | torch.float32 |         | -9.3039246        | 7.6660447        | -0.0201307     | 1.0004158             | torch.Size([12, 384, 8, 22])     |
| 147     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.pwconv1                 | weight              | torch.float32 |         | -0.3329572        | 0.3142635        | 0.0011703      | 0.0044448             | torch.Size([1152, 384, 1, 1])    |
| 147     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.pwconv1                 | bias                | torch.float32 |         | -0.3428729        | 0.0749362        | -0.1486899     | 0.0037087             | torch.Size([1152])               |
| 147     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.pwconv1                 | output              | torch.float32 |         | -21.0137978       | 11.2672968       | -2.0539994     | 5.1375699             | torch.Size([12, 1152, 8, 22])    |
| 148     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.1.act                     | input               | torch.float32 |         | -21.0137978       | 11.2672968       | -2.0539994     | 5.1375699             | torch.Size([12, 1152, 8, 22])    |
| 148     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.1.act                     | output              | torch.float32 |         | -0.1699712        | 11.2672968       | 0.0959923      | 0.2609312             | torch.Size([12, 1152, 8, 22])    |
| 149     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 11.2672968       | 0.0959923      | 0.2609312             | torch.Size([12, 1152, 8, 22])    |
| 149     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.pwconv2                 | weight              | torch.float32 |         | -0.3066681        | 0.3176296        | 0.0000583      | 0.0040402             | torch.Size([384, 1152, 1, 1])    |
| 149     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.pwconv2                 | bias                | torch.float32 |         | -0.2372192        | 0.2512781        | -0.0025833     | 0.0062463             | torch.Size([384])                |
| 149     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.1.pwconv2                 | output              | torch.float32 |         | -20.1666222       | 21.9970722       | -0.0327530     | 4.4621801             | torch.Size([12, 384, 8, 22])     |
| 150     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.1.layer_scale             | input               | torch.float32 |         | -20.1666222       | 21.9970722       | -0.0327530     | 4.4621801             | torch.Size([12, 384, 8, 22])     |
| 150     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.1.layer_scale             | weight              | torch.float32 |         | 0.7492935         | 1.1506159        | 0.9529823      | 0.0043197             | torch.Size([384])                |
| 150     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.1.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 150     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.1.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 150     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.1.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 150     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.1.layer_scale             | output              | torch.float32 |         | -19.8962116       | 22.2852077       | -0.0349438     | 4.1898746             | torch.Size([12, 384, 8, 22])     |
| 151     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.1.add                     | input_0             | torch.float32 |         | -22.0743313       | 21.0118847       | 0.0199819      | 6.0956726             | torch.Size([12, 384, 8, 22])     |
| 151     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.1.add                     | input_1             | torch.float32 |         | -19.8962116       | 22.2852077       | -0.0349438     | 4.1898746             | torch.Size([12, 384, 8, 22])     |
| 151     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.1.add                     | output              | torch.float32 |         | -25.1033325       | 30.7690468       | -0.0149619     | 12.6348553            | torch.Size([12, 384, 8, 22])     |
| 152     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.1.extra_act               | input               | torch.float32 |         | -25.1033325       | 30.7690468       | -0.0149619     | 12.6348553            | torch.Size([12, 384, 8, 22])     |
| 152     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.1.extra_act               | output              | torch.float32 |         | -25.1033325       | 30.7690468       | -0.0149619     | 12.6348553            | torch.Size([12, 384, 8, 22])     |
| 153     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.dwconv.0                | input               | torch.float32 |         | -25.1033325       | 30.7690468       | -0.0149619     | 12.6348553            | torch.Size([12, 384, 8, 22])     |
| 153     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.dwconv.0                | weight              | torch.float32 |         | -0.4669346        | 0.4735376        | 0.0004679      | 0.0395380             | torch.Size([384, 1, 3, 3])       |
| 153     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.dwconv.0                | bias                | torch.float32 |         | -0.4246657        | 0.4471672        | -0.0254966     | 0.0371991             | torch.Size([384])                |
| 153     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.dwconv.0                | output              | torch.float32 |         | -20.8368874       | 23.6716232       | 0.0466528      | 3.7792904             | torch.Size([12, 384, 8, 22])     |
| 154     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.2.dwconv.1                | input               | torch.float32 |         | -20.8368874       | 23.6716232       | 0.0466528      | 3.7792904             | torch.Size([12, 384, 8, 22])     |
| 154     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.2.dwconv.1                | weight              | torch.float32 |         | 0.5563363         | 1.1296606        | 0.9101324      | 0.0077815             | torch.Size([384])                |
| 154     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.2.dwconv.1                | bias                | torch.float32 |         | -0.6704720        | 0.6690274        | 0.0054722      | 0.0574711             | torch.Size([384])                |
| 154     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.2.dwconv.1                | running_mean        | torch.float32 |         | -4.2305803        | 4.4941936        | 0.0519098      | 0.8561823             | torch.Size([384])                |
| 154     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.2.dwconv.1                | running_var         | torch.float32 |         | 0.3068053         | 16.3709278       | 2.4875016      | 4.7524257             | torch.Size([384])                |
| 154     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.2.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 154     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.2.dwconv.1                | output              | torch.float32 |         | -9.0541544        | 9.3282347        | 0.0034285      | 1.0263414             | torch.Size([12, 384, 8, 22])     |
| 155     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.pwconv1                 | input               | torch.float32 |         | -9.0541544        | 9.3282347        | 0.0034285      | 1.0263414             | torch.Size([12, 384, 8, 22])     |
| 155     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.pwconv1                 | weight              | torch.float32 |         | -0.3282626        | 0.4871326        | -0.0007906     | 0.0044548             | torch.Size([1152, 384, 1, 1])    |
| 155     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.pwconv1                 | bias                | torch.float32 |         | -0.3262930        | 0.0915803        | -0.1388855     | 0.0039567             | torch.Size([1152])               |
| 155     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.pwconv1                 | output              | torch.float32 |         | -18.5548363       | 14.3497143       | -1.7347211     | 4.2991581             | torch.Size([12, 1152, 8, 22])    |
| 156     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.2.act                     | input               | torch.float32 |         | -18.5548363       | 14.3497143       | -1.7347211     | 4.2991581             | torch.Size([12, 1152, 8, 22])    |
| 156     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.2.act                     | output              | torch.float32 |         | -0.1699712        | 14.3497143       | 0.1206209      | 0.3213870             | torch.Size([12, 1152, 8, 22])    |
| 157     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 14.3497143       | 0.1206209      | 0.3213870             | torch.Size([12, 1152, 8, 22])    |
| 157     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.pwconv2                 | weight              | torch.float32 |         | -0.3364568        | 0.3232291        | -0.0003826     | 0.0040731             | torch.Size([384, 1152, 1, 1])    |
| 157     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.pwconv2                 | bias                | torch.float32 |         | -0.2285442        | 0.2163669        | -0.0054767     | 0.0068807             | torch.Size([384])                |
| 157     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.2.pwconv2                 | output              | torch.float32 |         | -37.6273766       | 41.9238548       | -0.0835801     | 6.8157206             | torch.Size([12, 384, 8, 22])     |
| 158     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.2.layer_scale             | input               | torch.float32 |         | -37.6273766       | 41.9238548       | -0.0835801     | 6.8157206             | torch.Size([12, 384, 8, 22])     |
| 158     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.2.layer_scale             | weight              | torch.float32 |         | 0.7758458         | 1.2150631        | 0.9807589      | 0.0052489             | torch.Size([384])                |
| 158     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.2.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 158     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.2.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 158     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.2.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 158     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.2.layer_scale             | output              | torch.float32 |         | -38.0085793       | 45.6185684       | -0.0791870     | 6.7326288             | torch.Size([12, 384, 8, 22])     |
| 159     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.2.add                     | input_0             | torch.float32 |         | -25.1033325       | 30.7690468       | -0.0149619     | 12.6348553            | torch.Size([12, 384, 8, 22])     |
| 159     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.2.add                     | input_1             | torch.float32 |         | -38.0085793       | 45.6185684       | -0.0791870     | 6.7326288             | torch.Size([12, 384, 8, 22])     |
| 159     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.2.add                     | output              | torch.float32 |         | -51.6274719       | 53.6494598       | -0.0941489     | 24.3052082            | torch.Size([12, 384, 8, 22])     |
| 160     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.2.extra_act               | input               | torch.float32 |         | -51.6274719       | 53.6494598       | -0.0941489     | 24.3052082            | torch.Size([12, 384, 8, 22])     |
| 160     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.2.extra_act               | output              | torch.float32 |         | -51.6274719       | 53.6494598       | -0.0941489     | 24.3052082            | torch.Size([12, 384, 8, 22])     |
| 161     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.dwconv.0                | input               | torch.float32 |         | -51.6274719       | 53.6494598       | -0.0941489     | 24.3052082            | torch.Size([12, 384, 8, 22])     |
| 161     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.dwconv.0                | weight              | torch.float32 |         | -0.4964695        | 0.5031744        | -0.0002544     | 0.0404580             | torch.Size([384, 1, 3, 3])       |
| 161     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.dwconv.0                | bias                | torch.float32 |         | -0.3941686        | 0.4301554        | -0.0212408     | 0.0364312             | torch.Size([384])                |
| 161     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.dwconv.0                | output              | torch.float32 |         | -34.7379570       | 47.0243988       | -0.0475737     | 7.9173803             | torch.Size([12, 384, 8, 22])     |
| 162     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.3.dwconv.1                | input               | torch.float32 |         | -34.7379570       | 47.0243988       | -0.0475737     | 7.9173803             | torch.Size([12, 384, 8, 22])     |
| 162     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.3.dwconv.1                | weight              | torch.float32 |         | 0.6266670         | 1.2056018        | 0.9294207      | 0.0075260             | torch.Size([384])                |
| 162     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.3.dwconv.1                | bias                | torch.float32 |         | -0.6525788        | 0.5523067        | 0.0069075      | 0.0502501             | torch.Size([384])                |
| 162     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.3.dwconv.1                | running_mean        | torch.float32 |         | -7.3037944        | 8.2616291        | -0.0361176     | 2.1327066             | torch.Size([384])                |
| 162     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.3.dwconv.1                | running_var         | torch.float32 |         | 0.4123816         | 35.1350861       | 4.6197600      | 18.0553493            | torch.Size([384])                |
| 162     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.3.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 162     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.3.dwconv.1                | output              | torch.float32 |         | -11.2868900       | 10.6645489       | 0.0018009      | 1.1106709             | torch.Size([12, 384, 8, 22])     |
| 163     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.pwconv1                 | input               | torch.float32 |         | -11.2868900       | 10.6645489       | 0.0018009      | 1.1106709             | torch.Size([12, 384, 8, 22])     |
| 163     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.pwconv1                 | weight              | torch.float32 |         | -0.3110211        | 0.4192705        | -0.0005754     | 0.0044924             | torch.Size([1152, 384, 1, 1])    |
| 163     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.pwconv1                 | bias                | torch.float32 |         | -0.3628277        | 0.1167385        | -0.1340542     | 0.0047501             | torch.Size([1152])               |
| 163     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.pwconv1                 | output              | torch.float32 |         | -22.3275185       | 19.9331760       | -1.7730359     | 5.1204519             | torch.Size([12, 1152, 8, 22])    |
| 164     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.3.act                     | input               | torch.float32 |         | -22.3275185       | 19.9331760       | -1.7730359     | 5.1204519             | torch.Size([12, 1152, 8, 22])    |
| 164     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.3.act                     | output              | torch.float32 |         | -0.1699712        | 19.9331760       | 0.1586003      | 0.4279122             | torch.Size([12, 1152, 8, 22])    |
| 165     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 19.9331760       | 0.1586003      | 0.4279122             | torch.Size([12, 1152, 8, 22])    |
| 165     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.pwconv2                 | weight              | torch.float32 |         | -0.5910586        | 0.8817028        | -0.0005804     | 0.0041153             | torch.Size([384, 1152, 1, 1])    |
| 165     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.pwconv2                 | bias                | torch.float32 |         | -0.2625186        | 0.1990930        | -0.0069672     | 0.0070934             | torch.Size([384])                |
| 165     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.3.pwconv2                 | output              | torch.float32 |         | -75.9884644       | 70.1313705       | -0.0970316     | 12.0384989            | torch.Size([12, 384, 8, 22])     |
| 166     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.3.layer_scale             | input               | torch.float32 |         | -75.9884644       | 70.1313705       | -0.0970316     | 12.0384989            | torch.Size([12, 384, 8, 22])     |
| 166     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.3.layer_scale             | weight              | torch.float32 |         | 0.8073890         | 1.3613205        | 0.9924643      | 0.0053684             | torch.Size([384])                |
| 166     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.3.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 166     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.3.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 166     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.3.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 166     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.3.layer_scale             | output              | torch.float32 |         | -84.9891891       | 89.8812256       | -0.0928095     | 12.3967552            | torch.Size([12, 384, 8, 22])     |
| 167     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.3.add                     | input_0             | torch.float32 |         | -51.6274719       | 53.6494598       | -0.0941489     | 24.3052082            | torch.Size([12, 384, 8, 22])     |
| 167     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.3.add                     | input_1             | torch.float32 |         | -84.9891891       | 89.8812256       | -0.0928095     | 12.3967552            | torch.Size([12, 384, 8, 22])     |
| 167     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.3.add                     | output              | torch.float32 |         | -104.3233109      | 95.6647415       | -0.1869584     | 46.3560028            | torch.Size([12, 384, 8, 22])     |
| 168     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.3.extra_act               | input               | torch.float32 |         | -104.3233109      | 95.6647415       | -0.1869584     | 46.3560028            | torch.Size([12, 384, 8, 22])     |
| 168     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.3.extra_act               | output              | torch.float32 |         | -104.3233109      | 95.6647415       | -0.1869584     | 46.3560028            | torch.Size([12, 384, 8, 22])     |
| 169     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.dwconv.0                | input               | torch.float32 |         | -104.3233109      | 95.6647415       | -0.1869584     | 46.3560028            | torch.Size([12, 384, 8, 22])     |
| 169     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.dwconv.0                | weight              | torch.float32 |         | -0.5123891        | 0.4847519        | 0.0001886      | 0.0406077             | torch.Size([384, 1, 3, 3])       |
| 169     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.dwconv.0                | bias                | torch.float32 |         | -0.4466549        | 0.3955517        | -0.0070944     | 0.0369587             | torch.Size([384])                |
| 169     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.dwconv.0                | output              | torch.float32 |         | -57.9760170       | 72.9562912       | -0.0199572     | 13.8854532            | torch.Size([12, 384, 8, 22])     |
| 170     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.4.dwconv.1                | input               | torch.float32 |         | -57.9760170       | 72.9562912       | -0.0199572     | 13.8854532            | torch.Size([12, 384, 8, 22])     |
| 170     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.4.dwconv.1                | weight              | torch.float32 |         | 0.6205043         | 1.2568533        | 0.9487319      | 0.0099456             | torch.Size([384])                |
| 170     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.4.dwconv.1                | bias                | torch.float32 |         | -0.7380947        | 0.5920718        | -0.0141378     | 0.0464673             | torch.Size([384])                |
| 170     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.4.dwconv.1                | running_mean        | torch.float32 |         | -11.5149822       | 12.2873411       | -0.0177850     | 4.0709195             | torch.Size([384])                |
| 170     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.4.dwconv.1                | running_var         | torch.float32 |         | 0.8106103         | 60.0230827       | 7.9165058      | 63.9078979            | torch.Size([384])                |
| 170     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.4.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 170     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.4.dwconv.1                | output              | torch.float32 |         | -12.3777094       | 17.0613060       | -0.0156493     | 1.1568087             | torch.Size([12, 384, 8, 22])     |
| 171     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.pwconv1                 | input               | torch.float32 |         | -12.3777094       | 17.0613060       | -0.0156493     | 1.1568087             | torch.Size([12, 384, 8, 22])     |
| 171     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.pwconv1                 | weight              | torch.float32 |         | -0.3335321        | 0.4012497        | 0.0012445      | 0.0046102             | torch.Size([1152, 384, 1, 1])    |
| 171     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.pwconv1                 | bias                | torch.float32 |         | -0.3734207        | 0.1077710        | -0.1267115     | 0.0052377             | torch.Size([1152])               |
| 171     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.pwconv1                 | output              | torch.float32 |         | -23.7922554       | 19.5908852       | -1.6855743     | 5.7093773             | torch.Size([12, 1152, 8, 22])    |
| 172     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.4.act                     | input               | torch.float32 |         | -23.7922554       | 19.5908852       | -1.6855743     | 5.7093773             | torch.Size([12, 1152, 8, 22])    |
| 172     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.4.act                     | output              | torch.float32 |         | -0.1699712        | 19.5908852       | 0.2138140      | 0.6088979             | torch.Size([12, 1152, 8, 22])    |
| 173     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 19.5908852       | 0.2138140      | 0.6088979             | torch.Size([12, 1152, 8, 22])    |
| 173     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.pwconv2                 | weight              | torch.float32 |         | -0.4597162        | 0.5593146        | 0.0004072      | 0.0043286             | torch.Size([384, 1152, 1, 1])    |
| 173     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.pwconv2                 | bias                | torch.float32 |         | -0.2110099        | 0.2187133        | -0.0031386     | 0.0070439             | torch.Size([384])                |
| 173     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.4.pwconv2                 | output              | torch.float32 |         | -143.3256531      | 136.9882355      | 0.0513261      | 24.6671886            | torch.Size([12, 384, 8, 22])     |
| 174     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.4.layer_scale             | input               | torch.float32 |         | -143.3256531      | 136.9882355      | 0.0513261      | 24.6671886            | torch.Size([12, 384, 8, 22])     |
| 174     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.4.layer_scale             | weight              | torch.float32 |         | 0.7736576         | 1.2761297        | 1.0223523      | 0.0055657             | torch.Size([384])                |
| 174     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.4.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 174     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.4.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 174     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.4.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 174     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.4.layer_scale             | output              | torch.float32 |         | -168.1878510      | 170.0595703      | 0.0539968      | 26.4065437            | torch.Size([12, 384, 8, 22])     |
| 175     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.4.add                     | input_0             | torch.float32 |         | -104.3233109      | 95.6647415       | -0.1869584     | 46.3560028            | torch.Size([12, 384, 8, 22])     |
| 175     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.4.add                     | input_1             | torch.float32 |         | -168.1878510      | 170.0595703      | 0.0539968      | 26.4065437            | torch.Size([12, 384, 8, 22])     |
| 175     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.4.add                     | output              | torch.float32 |         | -181.8943329      | 170.3617096      | -0.1329615     | 91.4104691            | torch.Size([12, 384, 8, 22])     |
| 176     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.4.extra_act               | input               | torch.float32 |         | -181.8943329      | 170.3617096      | -0.1329615     | 91.4104691            | torch.Size([12, 384, 8, 22])     |
| 176     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.4.extra_act               | output              | torch.float32 |         | -181.8943329      | 170.3617096      | -0.1329615     | 91.4104691            | torch.Size([12, 384, 8, 22])     |
| 177     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.dwconv.0                | input               | torch.float32 |         | -181.8943329      | 170.3617096      | -0.1329615     | 91.4104691            | torch.Size([12, 384, 8, 22])     |
| 177     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.dwconv.0                | weight              | torch.float32 |         | -0.5208040        | 0.4685676        | -0.0018370     | 0.0408459             | torch.Size([384, 1, 3, 3])       |
| 177     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.dwconv.0                | bias                | torch.float32 |         | -0.3558065        | 0.4294679        | -0.0050583     | 0.0367929             | torch.Size([384])                |
| 177     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.dwconv.0                | output              | torch.float32 |         | -133.7031250      | 115.3301697      | -0.1651618     | 29.3225708            | torch.Size([12, 384, 8, 22])     |
| 178     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.5.dwconv.1                | input               | torch.float32 |         | -133.7031250      | 115.3301697      | -0.1651618     | 29.3225708            | torch.Size([12, 384, 8, 22])     |
| 178     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.5.dwconv.1                | weight              | torch.float32 |         | 0.6937829         | 1.2412306        | 0.9795475      | 0.0077511             | torch.Size([384])                |
| 178     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.5.dwconv.1                | bias                | torch.float32 |         | -0.3552091        | 0.4477741        | -0.0073360     | 0.0206254             | torch.Size([384])                |
| 178     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.5.dwconv.1                | running_mean        | torch.float32 |         | -14.0749474       | 15.0582809       | -0.1264012     | 7.7943649             | torch.Size([384])                |
| 178     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.5.dwconv.1                | running_var         | torch.float32 |         | 1.3193597         | 354.1184998      | 26.3866386     | 1528.8206787          | torch.Size([384])                |
| 178     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.5.dwconv.1                | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 178     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stages.3.block.5.dwconv.1                | output              | torch.float32 |         | -15.5058861       | 11.8298683       | -0.0179502     | 1.1036434             | torch.Size([12, 384, 8, 22])     |
| 179     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.pwconv1                 | input               | torch.float32 |         | -15.5058861       | 11.8298683       | -0.0179502     | 1.1036434             | torch.Size([12, 384, 8, 22])     |
| 179     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.pwconv1                 | weight              | torch.float32 |         | -0.3455547        | 0.4153213        | 0.0004850      | 0.0046058             | torch.Size([1152, 384, 1, 1])    |
| 179     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.pwconv1                 | bias                | torch.float32 |         | -0.3698098        | 0.1860601        | -0.1037247     | 0.0058884             | torch.Size([1152])               |
| 179     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.pwconv1                 | output              | torch.float32 |         | -28.4347076       | 54.9329147       | -1.2606260     | 6.3452358             | torch.Size([12, 1152, 8, 22])    |
| 180     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.5.act                     | input               | torch.float32 |         | -28.4347076       | 54.9329147       | -1.2606260     | 6.3452358             | torch.Size([12, 1152, 8, 22])    |
| 180     | horizon_plugin_pytorch.nn.gelu.GELU                                               | backbone.stages.3.block.5.act                     | output              | torch.float32 |         | -0.1699712        | 54.9329147       | 0.3426177      | 1.2288233             | torch.Size([12, 1152, 8, 22])    |
| 181     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.pwconv2                 | input               | torch.float32 |         | -0.1699712        | 54.9329147       | 0.3426177      | 1.2288233             | torch.Size([12, 1152, 8, 22])    |
| 181     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.pwconv2                 | weight              | torch.float32 |         | -0.4162114        | 0.3731006        | 0.0018714      | 0.0041657             | torch.Size([384, 1152, 1, 1])    |
| 181     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.pwconv2                 | bias                | torch.float32 |         | -0.1051167        | 0.1153578        | -0.0002494     | 0.0014217             | torch.Size([384])                |
| 181     | torch.nn.modules.conv.Conv2d                                                      | backbone.stages.3.block.5.pwconv2                 | output              | torch.float32 |         | -897.1497192      | 912.3861694      | 0.4416084      | 381.3504028           | torch.Size([12, 384, 8, 22])     |
| 182     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.5.layer_scale             | input               | torch.float32 |         | -897.1497192      | 912.3861694      | 0.4416084      | 381.3504028           | torch.Size([12, 384, 8, 22])     |
| 182     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.5.layer_scale             | weight              | torch.float32 |         | 0.8074775         | 1.2679707        | 1.0402448      | 0.0036906             | torch.Size([384])                |
| 182     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.5.layer_scale             | bias                | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 182     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.5.layer_scale             | running_mean        | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([1])                  |
| 182     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.5.layer_scale             | running_var         | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | nan                   | torch.Size([1])                  |
| 182     | horizon_plugin_pytorch.nn.channel_scale.ChannelScale2d                            | backbone.stages.3.block.5.layer_scale             | output              | torch.float32 |         | -986.2431030      | 1066.0578613     | 0.4966821      | 432.3800964           | torch.Size([12, 384, 8, 22])     |
| 183     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.5.add                     | input_0             | torch.float32 |         | -181.8943329      | 170.3617096      | -0.1329615     | 91.4104691            | torch.Size([12, 384, 8, 22])     |
| 183     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.5.add                     | input_1             | torch.float32 |         | -986.2431030      | 1066.0578613     | 0.4966821      | 432.3800964           | torch.Size([12, 384, 8, 22])     |
| 183     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | backbone.stages.3.block.5.add                     | output              | torch.float32 |         | -1026.3469238     | 1118.7966309     | 0.3637206      | 586.7894897           | torch.Size([12, 384, 8, 22])     |
| 184     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.5.extra_act               | input               | torch.float32 |         | -1026.3469238     | 1118.7966309     | 0.3637206      | 586.7894897           | torch.Size([12, 384, 8, 22])     |
| 184     | torch.nn.modules.linear.Identity                                                  | backbone.stages.3.block.5.extra_act               | output              | torch.float32 |         | -1026.3469238     | 1118.7966309     | 0.3637206      | 586.7894897           | torch.Size([12, 384, 8, 22])     |
| 185     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.3                             | input               | torch.float32 |         | -1026.3469238     | 1118.7966309     | 0.3637206      | 586.7894897           | torch.Size([12, 384, 8, 22])     |
| 185     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.3                             | weight              | torch.float32 |         | 0.4716969         | 0.9356748        | 0.7450352      | 0.0049230             | torch.Size([384])                |
| 185     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.3                             | bias                | torch.float32 |         | -0.1242242        | 0.1170985        | 0.0012044      | 0.0011783             | torch.Size([384])                |
| 185     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.3                             | running_mean        | torch.float32 |         | -21.9474144       | 22.9861240       | 0.6532449      | 65.5134888            | torch.Size([384])                |
| 185     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.3                             | running_var         | torch.float32 |         | 114.5144501       | 21094.2441406    | 4989.1074219   | 13216904.0000000      | torch.Size([384])                |
| 185     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.3                             | num_batches_tracked | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | nan                   | torch.Size([])                   |
| 185     | torch.nn.modules.batchnorm.BatchNorm2d                                            | backbone.stage_norm.3                             | output              | torch.float32 |         | -7.0197554        | 9.1743784        | -0.0021509     | 0.1240436             | torch.Size([12, 384, 8, 22])     |
| 186     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.0.0                             | input               | torch.float32 |         | -1.3446839        | 1.5299978        | -0.0022292     | 0.0058741             | torch.Size([12, 64, 64, 176])    |
| 186     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.0.0                             | weight              | torch.float32 |         | -0.3206275        | 0.3504827        | 0.0011598      | 0.0060194             | torch.Size([256, 64, 1, 1])      |
| 186     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.0.0                             | bias                | torch.float32 |         | -0.2086400        | 0.2225119        | 0.0024037      | 0.0032313             | torch.Size([256])                |
| 186     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.0.0                             | output              | torch.float32 |         | -0.6062468        | 0.7511035        | 0.0023183      | 0.0049096             | torch.Size([12, 256, 64, 176])   |
| 187     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.1.0                             | input               | torch.float32 |         | -3.8015001        | 4.0431347        | -0.0035960     | 0.1104387             | torch.Size([12, 128, 32, 88])    |
| 187     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.1.0                             | weight              | torch.float32 |         | -0.3428875        | 0.3670728        | 0.0007555      | 0.0042203             | torch.Size([256, 128, 1, 1])     |
| 187     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.1.0                             | bias                | torch.float32 |         | -0.2329265        | 0.2361577        | 0.0047584      | 0.0071602             | torch.Size([256])                |
| 187     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.1.0                             | output              | torch.float32 |         | -4.0559416        | 4.2024546        | -0.0013089     | 0.0696693             | torch.Size([12, 256, 32, 88])    |
| 188     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.2.0                             | input               | torch.float32 |         | -7.4979534        | 8.5700006        | -0.0048172     | 0.3464860             | torch.Size([12, 192, 16, 44])    |
| 188     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.2.0                             | weight              | torch.float32 |         | -0.1827236        | 0.1774697        | -0.0000924     | 0.0025376             | torch.Size([256, 192, 1, 1])     |
| 188     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.2.0                             | bias                | torch.float32 |         | -0.1729663        | 0.2027678        | -0.0009974     | 0.0027339             | torch.Size([256])                |
| 188     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.2.0                             | output              | torch.float32 |         | -5.4589524        | 6.9643292        | 0.0062005      | 0.1484132             | torch.Size([12, 256, 16, 44])    |
| 189     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.3.0                             | input               | torch.float32 |         | -7.0197554        | 9.1743784        | -0.0021509     | 0.1240436             | torch.Size([12, 384, 8, 22])     |
| 189     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.3.0                             | weight              | torch.float32 |         | -0.1964730        | 0.1978286        | 0.0000471      | 0.0020445             | torch.Size([256, 384, 1, 1])     |
| 189     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.3.0                             | bias                | torch.float32 |         | -0.1620243        | 0.1673113        | 0.0016446      | 0.0019964             | torch.Size([256])                |
| 189     | torch.nn.modules.conv.Conv2d                                                      | neck.conv_extract.3.0                             | output              | torch.float32 |         | -13.3721437       | 9.2418852        | 0.0042781      | 0.1965460             | torch.Size([12, 256, 8, 22])     |
| 190     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer                | neck.upscale.2                                    | input               | torch.float32 |         | -13.3721437       | 9.2418852        | 0.0042781      | 0.1965460             | torch.Size([12, 256, 8, 22])     |
| 190     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer                | neck.upscale.2                                    | output              | torch.float32 |         | -13.0397291       | 8.9847841        | 0.0042781      | 0.1528648             | torch.Size([12, 256, 16, 44])    |
| 191     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | neck.conv_add.0                                   | input_0             | torch.float32 |         | -5.4589524        | 6.9643292        | 0.0062005      | 0.1484132             | torch.Size([12, 256, 16, 44])    |
| 191     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | neck.conv_add.0                                   | input_1             | torch.float32 |         | -13.0397291       | 8.9847841        | 0.0042781      | 0.1528648             | torch.Size([12, 256, 16, 44])    |
| 191     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | neck.conv_add.0                                   | output              | torch.float32 |         | -15.8276091       | 10.1838741       | 0.0104786      | 0.3303719             | torch.Size([12, 256, 16, 44])    |
| 192     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer                | neck.upscale.1                                    | input               | torch.float32 |         | -15.8276091       | 10.1838741       | 0.0104786      | 0.3303719             | torch.Size([12, 256, 16, 44])    |
| 192     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer                | neck.upscale.1                                    | output              | torch.float32 |         | -15.7591457       | 8.8661709        | 0.0104786      | 0.2828943             | torch.Size([12, 256, 32, 88])    |
| 193     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | neck.conv_add.1                                   | input_0             | torch.float32 |         | -4.0559416        | 4.2024546        | -0.0013089     | 0.0696693             | torch.Size([12, 256, 32, 88])    |
| 193     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | neck.conv_add.1                                   | input_1             | torch.float32 |         | -15.7591457       | 8.8661709        | 0.0104786      | 0.2828943             | torch.Size([12, 256, 32, 88])    |
| 193     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | neck.conv_add.1                                   | output              | torch.float32 |         | -15.9949579       | 9.8342552        | 0.0091696      | 0.2822249             | torch.Size([12, 256, 32, 88])    |
| 194     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer                | neck.upscale.0                                    | input               | torch.float32 |         | -15.9949579       | 9.8342552        | 0.0091696      | 0.2822249             | torch.Size([12, 256, 32, 88])    |
| 194     | horizon_plugin_pytorch.nn.interpolate.autocasted_interpolate_outer                | neck.upscale.0                                    | output              | torch.float32 |         | -15.9229956       | 9.7842522        | 0.0091696      | 0.2650222             | torch.Size([12, 256, 64, 176])   |
| 195     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | neck.conv_add.2                                   | input_0             | torch.float32 |         | -0.6062468        | 0.7511035        | 0.0023183      | 0.0049096             | torch.Size([12, 256, 64, 176])   |
| 195     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | neck.conv_add.2                                   | input_1             | torch.float32 |         | -15.9229956       | 9.7842522        | 0.0091696      | 0.2650222             | torch.Size([12, 256, 64, 176])   |
| 195     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | neck.conv_add.2                                   | output              | torch.float32 |         | -15.9101229       | 9.9605494        | 0.0114879      | 0.2701745             | torch.Size([12, 256, 64, 176])   |
| 196     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.0.0                                 | input               | torch.float32 |         | -15.9101229       | 9.9605494        | 0.0114879      | 0.2701745             | torch.Size([12, 256, 64, 176])   |
| 196     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.0.0                                 | weight              | torch.float32 |         | -0.2571002        | 0.2533301        | -0.0000001     | 0.0002334             | torch.Size([256, 256, 3, 3])     |
| 196     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.0.0                                 | bias                | torch.float32 |         | -0.1612954        | 0.1691896        | 0.0023394      | 0.0014786             | torch.Size([256])                |
| 196     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.0.0                                 | output              | torch.float32 |         | -23.8790340       | 27.4227791       | -0.0150849     | 0.9035700             | torch.Size([12, 256, 64, 176])   |
| 197     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.1.0                                 | input               | torch.float32 |         | -15.9949579       | 9.8342552        | 0.0091696      | 0.2822249             | torch.Size([12, 256, 32, 88])    |
| 197     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.1.0                                 | weight              | torch.float32 |         | -0.2879935        | 0.3221029        | 0.0000086      | 0.0002552             | torch.Size([256, 256, 3, 3])     |
| 197     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.1.0                                 | bias                | torch.float32 |         | -0.2607886        | 0.2473673        | -0.0079984     | 0.0032435             | torch.Size([256])                |
| 197     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.1.0                                 | output              | torch.float32 |         | -23.2577324       | 19.3163338       | -0.0147372     | 0.6819058             | torch.Size([12, 256, 32, 88])    |
| 198     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.2.0                                 | input               | torch.float32 |         | -15.8276091       | 10.1838741       | 0.0104786      | 0.3303719             | torch.Size([12, 256, 16, 44])    |
| 198     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.2.0                                 | weight              | torch.float32 |         | -0.2914507        | 0.2987113        | -0.0000730     | 0.0020421             | torch.Size([256, 256, 3, 3])     |
| 198     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.2.0                                 | bias                | torch.float32 |         | -0.2858557        | 0.3223354        | -0.0022723     | 0.0132826             | torch.Size([256])                |
| 198     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.2.0                                 | output              | torch.float32 |         | -44.8620338       | 31.9191360       | 0.1436918      | 20.2713203            | torch.Size([12, 256, 16, 44])    |
| 199     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.3.0                                 | input               | torch.float32 |         | -13.3721437       | 9.2418852        | 0.0042781      | 0.1965460             | torch.Size([12, 256, 8, 22])     |
| 199     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.3.0                                 | weight              | torch.float32 |         | -0.0208302        | 0.0208302        | -0.0000112     | 0.0001450             | torch.Size([256, 256, 3, 3])     |
| 199     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.3.0                                 | bias                | torch.float32 |         | -0.0207248        | 0.0207744        | 0.0003362      | 0.0001384             | torch.Size([256])                |
| 199     | torch.nn.modules.conv.Conv2d                                                      | neck.fpn_conv.3.0                                 | output              | torch.float32 |         | -2.5367990        | 2.5418892        | -0.0007897     | 0.0587262             | torch.Size([12, 256, 8, 22])     |
| 200     | torch.Tensor.float                                                                | head                                              | input               | torch.float32 |         | -44.8620338       | 31.9191360       | 0.1436918      | 20.2713203            | torch.Size([12, 256, 16, 44])    |
| 200     | torch.Tensor.float                                                                | head                                              | output              | torch.float32 |         | -44.8620338       | 31.9191360       | 0.1436918      | 20.2713203            | torch.Size([12, 256, 16, 44])    |
| 201     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.instance_bank.anchor_quant_stub              | input               | torch.float32 |         | -52.9582825       | 52.8438606       | 0.6379549      | 103.1539612           | torch.Size([2, 384, 11])         |
| 201     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.instance_bank.anchor_quant_stub              | output              | torch.float32 |         | -52.9582825       | 52.8438606       | 0.6379549      | 103.1539612           | torch.Size([2, 384, 11])         |
| 202     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.instance_bank.instance_feature_quant_stub    | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 256])        |
| 202     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.instance_bank.instance_feature_quant_stub    | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 256])        |
| 203     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.instance_bank.anchor_quant_stub(1)           | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 11])         |
| 203     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.instance_bank.anchor_quant_stub(1)           | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 11])         |
| 204     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.instance_bank.instance_feature_quant_stub(1) | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 256])        |
| 204     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.instance_bank.instance_feature_quant_stub(1) | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 256])        |
| 205     | torch.clamp                                                                       | head                                              | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 11])         |
| 205     | torch.clamp                                                                       | head                                              | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 11])         |
| 206     | torch.clamp                                                                       | head                                              | input               | torch.float32 |         | -52.9582825       | 52.8438606       | 0.6379549      | 103.1539612           | torch.Size([2, 384, 11])         |
| 206     | torch.clamp                                                                       | head                                              | output              | torch.float32 |         | -52.9582825       | 52.8438606       | 0.6379549      | 103.1539612           | torch.Size([2, 384, 11])         |
| 207     | torch.Tensor.__getitem__                                                          | head                                              | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 11])         |
| 207     | torch.Tensor.__getitem__                                                          | head                                              | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 128, 11])         |
| 208     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.instance_bank.anchor_cat                     | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 128, 11])         |
| 208     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.instance_bank.anchor_cat                     | input_1             | torch.float32 |         | -52.9582825       | 52.8438606       | 0.6379549      | 103.1539612           | torch.Size([2, 384, 11])         |
| 208     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.instance_bank.anchor_cat                     | output              | torch.float32 |         | -52.9582825       | 52.8438606       | 0.4784662      | 77.4394913            | torch.Size([2, 512, 11])         |
| 209     | torch.Tensor.__getitem__                                                          | head                                              | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 256])        |
| 209     | torch.Tensor.__getitem__                                                          | head                                              | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 128, 256])        |
| 210     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.instance_bank.feature_cat                    | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 128, 256])        |
| 210     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.instance_bank.feature_cat                    | input_1             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 256])        |
| 210     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.instance_bank.feature_cat                    | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 512, 256])        |
| 211     | torch.Tensor.__getitem__                                                          | head                                              | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 11])         |
| 211     | torch.Tensor.__getitem__                                                          | head                                              | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 11])         |
| 212     | torch.Tensor.__getitem__                                                          | head                                              | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 384, 256])        |
| 212     | torch.Tensor.__getitem__                                                          | head                                              | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 213     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -52.9582825       | 52.8438606       | 0.4784662      | 77.4394913            | torch.Size([2, 512, 11])         |
| 213     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -52.9582825       | 52.8438606       | 1.0650947      | 283.1617432           | torch.Size([2, 512, 3])          |
| 214     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0                      | input               | torch.float32 |         | -52.9582825       | 52.8438606       | 1.0650947      | 283.1617432           | torch.Size([2, 512, 3])          |
| 214     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0                      | weight              | torch.float32 |         | -0.9216561        | 0.9167990        | -0.0046354     | 0.1373587             | torch.Size([128, 3])             |
| 214     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0                      | bias                | torch.float32 |         | -1.0762298        | 1.0183468        | -0.0273298     | 0.3650480             | torch.Size([128])                |
| 214     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0                      | output              | torch.float32 |         | -32.8705482       | 33.7859650       | -0.1010608     | 69.6433105            | torch.Size([2, 512, 128])        |
| 215     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1                      | input               | torch.float32 |         | 0.0000000         | 33.7859650       | 2.8414721      | 25.7705173            | torch.Size([2, 512, 128])        |
| 215     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1                      | output              | torch.float32 |         | 0.0000000         | 33.7859650       | 2.8414721      | 25.7705173            | torch.Size([2, 512, 128])        |
| 216     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 33.7859650       | 2.8414721      | 25.7705173            | torch.Size([2, 512, 128])        |
| 216     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean      | output              | torch.float32 |         | 0.2505170         | 7.3323078        | 2.8414721      | 4.2550664             | torch.Size([2, 512, 1])          |
| 217     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub                  | input_0             | torch.float32 |         | 0.0000000         | 33.7859650       | 2.8414721      | 25.7705173            | torch.Size([2, 512, 128])        |
| 217     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub                  | input_1             | torch.float32 |         | 0.2505170         | 7.3323078        | 2.8414721      | 4.2550664             | torch.Size([2, 512, 1])          |
| 217     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub                  | output              | torch.float32 |         | -7.3323078        | 28.2930889       | 0.0000000      | 21.5195732            | torch.Size([2, 512, 128])        |
| 218     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul                  | input_0             | torch.float32 |         | -7.3323078        | 28.2930889       | 0.0000000      | 21.5195732            | torch.Size([2, 512, 128])        |
| 218     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul                  | input_1             | torch.float32 |         | -7.3323078        | 28.2930889       | 0.0000000      | 21.5195732            | torch.Size([2, 512, 128])        |
| 218     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul                  | output              | torch.float32 |         | 0.0000001         | 800.4989014      | 21.5194092     | 2628.4338379          | torch.Size([2, 512, 128])        |
| 219     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean        | input_0             | torch.float32 |         | 0.0000001         | 800.4989014      | 21.5194092     | 2628.4338379          | torch.Size([2, 512, 128])        |
| 219     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean        | output              | torch.float32 |         | 0.1060732         | 80.2254257       | 21.5194092     | 488.1299744           | torch.Size([2, 512, 1])          |
| 220     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt                | input               | torch.float32 |         | 0.1060732         | 80.2254257       | 21.5194092     | 488.1299744           | torch.Size([2, 512, 1])          |
| 220     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt                | output              | torch.float32 |         | 0.1116462         | 3.0702708        | 0.9698637      | 1.4997298             | torch.Size([2, 512, 1])          |
| 221     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul              | input_0             | torch.float32 |         | -7.3323078        | 28.2930889       | 0.0000000      | 21.5195732            | torch.Size([2, 512, 128])        |
| 221     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul              | input_1             | torch.float32 |         | 0.1116462         | 3.0702708        | 0.9698637      | 1.4997298             | torch.Size([2, 512, 1])          |
| 221     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul              | output              | torch.float32 |         | -0.8855032        | 3.7455420        | 0.0000000      | 0.9999833             | torch.Size([2, 512, 128])        |
| 222     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant         | input               | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 222     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant         | output              | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 223     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul           | input_0             | torch.float32 |         | -0.8855032        | 3.7455420        | 0.0000000      | 0.9999833             | torch.Size([2, 512, 128])        |
| 223     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul           | input_1             | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 223     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul           | output              | torch.float32 |         | -1.0555075        | 3.6527159        | 0.0007295      | 0.9401610             | torch.Size([2, 512, 128])        |
| 224     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant           | input               | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 224     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant           | output              | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 225     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add             | input_0             | torch.float32 |         | -1.0555075        | 3.6527159        | 0.0007295      | 0.9401610             | torch.Size([2, 512, 128])        |
| 225     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add             | input_1             | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 225     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add             | output              | torch.float32 |         | -1.0592663        | 3.6238379        | 0.0095499      | 0.9332322             | torch.Size([2, 512, 128])        |
| 226     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3                      | input               | torch.float32 |         | -1.0592663        | 3.6238379        | 0.0095499      | 0.9332322             | torch.Size([2, 512, 128])        |
| 226     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3                      | weight              | torch.float32 |         | -0.3750711        | 0.3968706        | 0.0019093      | 0.0048458             | torch.Size([128, 128])           |
| 226     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3                      | bias                | torch.float32 |         | -0.1863807        | 0.1385574        | -0.0156467     | 0.0047256             | torch.Size([128])                |
| 226     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3                      | output              | torch.float32 |         | -6.8171039        | 8.4221592        | -0.1041638     | 3.5761409             | torch.Size([2, 512, 128])        |
| 227     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4                      | input               | torch.float32 |         | 0.0000000         | 8.4221592        | 0.6505614      | 1.3643199             | torch.Size([2, 512, 128])        |
| 227     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4                      | output              | torch.float32 |         | 0.0000000         | 8.4221592        | 0.6505614      | 1.3643199             | torch.Size([2, 512, 128])        |
| 228     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 8.4221592        | 0.6505614      | 1.3643199             | torch.Size([2, 512, 128])        |
| 228     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean      | output              | torch.float32 |         | 0.2870104         | 1.3435298        | 0.6505615      | 0.1656583             | torch.Size([2, 512, 1])          |
| 229     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub                  | input_0             | torch.float32 |         | 0.0000000         | 8.4221592        | 0.6505614      | 1.3643199             | torch.Size([2, 512, 128])        |
| 229     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub                  | input_1             | torch.float32 |         | 0.2870104         | 1.3435298        | 0.6505615      | 0.1656583             | torch.Size([2, 512, 1])          |
| 229     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub                  | output              | torch.float32 |         | -1.3435298        | 7.0786295        | 0.0000000      | 1.1988224             | torch.Size([2, 512, 128])        |
| 230     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul                  | input_0             | torch.float32 |         | -1.3435298        | 7.0786295        | 0.0000000      | 1.1988224             | torch.Size([2, 512, 128])        |
| 230     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul                  | input_1             | torch.float32 |         | -1.3435298        | 7.0786295        | 0.0000000      | 1.1988224             | torch.Size([2, 512, 128])        |
| 230     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul                  | output              | torch.float32 |         | 0.0000000         | 50.1069946       | 1.1988132      | 11.8350477            | torch.Size([2, 512, 128])        |
| 231     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean        | input_0             | torch.float32 |         | 0.0000000         | 50.1069946       | 1.1988132      | 11.8350477            | torch.Size([2, 512, 128])        |
| 231     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean        | output              | torch.float32 |         | 0.3041418         | 3.2988710        | 1.1988133      | 1.4839764             | torch.Size([2, 512, 1])          |
| 232     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt                | input               | torch.float32 |         | 0.3041418         | 3.2988710        | 1.1988133      | 1.4839764             | torch.Size([2, 512, 1])          |
| 232     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt                | output              | torch.float32 |         | 0.5505753         | 1.8132380        | 1.2239741      | 0.1746419             | torch.Size([2, 512, 1])          |
| 233     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul              | input_0             | torch.float32 |         | -1.3435298        | 7.0786295        | 0.0000000      | 1.1988224             | torch.Size([2, 512, 128])        |
| 233     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul              | input_1             | torch.float32 |         | 0.5505753         | 1.8132380        | 1.2239741      | 0.1746419             | torch.Size([2, 512, 1])          |
| 233     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul              | output              | torch.float32 |         | -0.7635512        | 6.9184022        | 0.0000000      | 0.9999909             | torch.Size([2, 512, 128])        |
| 234     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant         | input               | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 234     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant         | output              | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 235     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul           | input_0             | torch.float32 |         | -0.7635512        | 6.9184022        | 0.0000000      | 0.9999909             | torch.Size([2, 512, 128])        |
| 235     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul           | input_1             | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 235     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul           | output              | torch.float32 |         | -0.8654347        | 6.7982092        | 0.0389623      | 0.9723351             | torch.Size([2, 512, 128])        |
| 236     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant           | input               | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 236     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant           | output              | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 237     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add             | input_0             | torch.float32 |         | -0.8654347        | 6.7982092        | 0.0389623      | 0.9723351             | torch.Size([2, 512, 128])        |
| 237     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add             | input_1             | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 237     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add             | output              | torch.float32 |         | -0.8922539        | 6.7946649        | 0.0707646      | 0.9470225             | torch.Size([2, 512, 128])        |
| 238     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6                      | input               | torch.float32 |         | -0.8922539        | 6.7946649        | 0.0707646      | 0.9470225             | torch.Size([2, 512, 128])        |
| 238     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6                      | weight              | torch.float32 |         | -0.7504157        | 0.4182976        | -0.0024651     | 0.0052447             | torch.Size([128, 128])           |
| 238     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6                      | bias                | torch.float32 |         | -0.1397866        | 0.1210779        | 0.0064616      | 0.0040949             | torch.Size([128])                |
| 238     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6                      | output              | torch.float32 |         | -10.2651653       | 7.1977792        | -0.0237549     | 5.5593996             | torch.Size([2, 512, 128])        |
| 239     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7                      | input               | torch.float32 |         | 0.0000000         | 7.1977792        | 0.8819088      | 1.7592014             | torch.Size([2, 512, 128])        |
| 239     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7                      | output              | torch.float32 |         | 0.0000000         | 7.1977792        | 0.8819088      | 1.7592014             | torch.Size([2, 512, 128])        |
| 240     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 7.1977792        | 0.8819088      | 1.7592014             | torch.Size([2, 512, 128])        |
| 240     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean      | output              | torch.float32 |         | 0.5553229         | 1.4766945        | 0.8819088      | 0.1196283             | torch.Size([2, 512, 1])          |
| 241     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub                  | input_0             | torch.float32 |         | 0.0000000         | 7.1977792        | 0.8819088      | 1.7592014             | torch.Size([2, 512, 128])        |
| 241     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub                  | input_1             | torch.float32 |         | 0.5553229         | 1.4766945        | 0.8819088      | 0.1196283             | torch.Size([2, 512, 1])          |
| 241     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub                  | output              | torch.float32 |         | -1.4766945        | 6.0006638        | 0.0000000      | 1.6396888             | torch.Size([2, 512, 128])        |
| 242     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul                  | input_0             | torch.float32 |         | -1.4766945        | 6.0006638        | 0.0000000      | 1.6396888             | torch.Size([2, 512, 128])        |
| 242     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul                  | input_1             | torch.float32 |         | -1.4766945        | 6.0006638        | 0.0000000      | 1.6396888             | torch.Size([2, 512, 128])        |
| 242     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul                  | output              | torch.float32 |         | 0.0000000         | 36.0079651       | 1.6396766      | 10.6428471            | torch.Size([2, 512, 128])        |
| 243     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean        | input_0             | torch.float32 |         | 0.0000000         | 36.0079651       | 1.6396766      | 10.6428471            | torch.Size([2, 512, 128])        |
| 243     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean        | output              | torch.float32 |         | 0.8256341         | 3.2164650        | 1.6396765      | 0.8467340             | torch.Size([2, 512, 1])          |
| 244     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt                | input               | torch.float32 |         | 0.8256341         | 3.2164650        | 1.6396765      | 0.8467340             | torch.Size([2, 512, 1])          |
| 244     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt                | output              | torch.float32 |         | 0.5575835         | 1.1005342        | 0.8545537      | 0.0322465             | torch.Size([2, 512, 1])          |
| 245     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul              | input_0             | torch.float32 |         | -1.4766945        | 6.0006638        | 0.0000000      | 1.6396888             | torch.Size([2, 512, 128])        |
| 245     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul              | input_1             | torch.float32 |         | 0.5575835         | 1.1005342        | 0.8545537      | 0.0322465             | torch.Size([2, 512, 1])          |
| 245     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul              | output              | torch.float32 |         | -0.8233805        | 5.0346699        | 0.0000000      | 1.0000000             | torch.Size([2, 512, 128])        |
| 246     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant         | input               | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 246     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant         | output              | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 247     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul           | input_0             | torch.float32 |         | -0.8233805        | 5.0346699        | 0.0000000      | 1.0000000             | torch.Size([2, 512, 128])        |
| 247     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul           | input_1             | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 247     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul           | output              | torch.float32 |         | -0.9262874        | 5.2476988        | 0.0164056      | 0.9943537             | torch.Size([2, 512, 128])        |
| 248     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant           | input               | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 248     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant           | output              | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 249     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add             | input_0             | torch.float32 |         | -0.9262874        | 5.2476988        | 0.0164056      | 0.9943537             | torch.Size([2, 512, 128])        |
| 249     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add             | input_1             | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 249     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add             | output              | torch.float32 |         | -0.9123854        | 5.2720165        | 0.0380436      | 0.9781969             | torch.Size([2, 512, 128])        |
| 250     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9                      | input               | torch.float32 |         | -0.9123854        | 5.2720165        | 0.0380436      | 0.9781969             | torch.Size([2, 512, 128])        |
| 250     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9                      | weight              | torch.float32 |         | -0.4264432        | 0.3183554        | 0.0005866      | 0.0053991             | torch.Size([128, 128])           |
| 250     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9                      | bias                | torch.float32 |         | -0.1690418        | 0.1536980        | -0.0166056     | 0.0039884             | torch.Size([128])                |
| 250     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9                      | output              | torch.float32 |         | -11.7120199       | 10.8796616       | -0.4205123     | 4.4727020             | torch.Size([2, 512, 128])        |
| 251     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10                     | input               | torch.float32 |         | 0.0000000         | 10.8796616       | 0.6212362      | 1.5325598             | torch.Size([2, 512, 128])        |
| 251     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10                     | output              | torch.float32 |         | 0.0000000         | 10.8796616       | 0.6212362      | 1.5325598             | torch.Size([2, 512, 128])        |
| 252     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 10.8796616       | 0.6212362      | 1.5325598             | torch.Size([2, 512, 128])        |
| 252     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean     | output              | torch.float32 |         | 0.5235982         | 0.7326194        | 0.6212362      | 0.0019704             | torch.Size([2, 512, 1])          |
| 253     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub                 | input_0             | torch.float32 |         | 0.0000000         | 10.8796616       | 0.6212362      | 1.5325598             | torch.Size([2, 512, 128])        |
| 253     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub                 | input_1             | torch.float32 |         | 0.5235982         | 0.7326194        | 0.6212362      | 0.0019704             | torch.Size([2, 512, 1])          |
| 253     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub                 | output              | torch.float32 |         | -0.7326194        | 10.3119097       | -0.0000000     | 1.5305912             | torch.Size([2, 512, 128])        |
| 254     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul                 | input_0             | torch.float32 |         | -0.7326194        | 10.3119097       | -0.0000000     | 1.5305912             | torch.Size([2, 512, 128])        |
| 254     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul                 | input_1             | torch.float32 |         | -0.7326194        | 10.3119097       | -0.0000000     | 1.5305912             | torch.Size([2, 512, 128])        |
| 254     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul                 | output              | torch.float32 |         | 0.0000000         | 106.3354797      | 1.5305796      | 26.0120506            | torch.Size([2, 512, 128])        |
| 255     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean       | input_0             | torch.float32 |         | 0.0000000         | 106.3354797      | 1.5305796      | 26.0120506            | torch.Size([2, 512, 128])        |
| 255     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean       | output              | torch.float32 |         | 1.1000538         | 1.9548680        | 1.5305796      | 0.0716901             | torch.Size([2, 512, 1])          |
| 256     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt               | input               | torch.float32 |         | 1.1000538         | 1.9548680        | 1.5305796      | 0.0716901             | torch.Size([2, 512, 1])          |
| 256     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt               | output              | torch.float32 |         | 0.7152208         | 0.9534349        | 0.8186002      | 0.0060895             | torch.Size([2, 512, 1])          |
| 257     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul             | input_0             | torch.float32 |         | -0.7326194        | 10.3119097       | -0.0000000     | 1.5305912             | torch.Size([2, 512, 128])        |
| 257     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul             | input_1             | torch.float32 |         | 0.7152208         | 0.9534349        | 0.8186002      | 0.0060895             | torch.Size([2, 512, 1])          |
| 257     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul             | output              | torch.float32 |         | -0.6432250        | 7.9612322        | -0.0000000     | 1.0000010             | torch.Size([2, 512, 128])        |
| 258     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant        | input               | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 258     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant        | output              | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 259     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul          | input_0             | torch.float32 |         | -0.6432250        | 7.9612322        | -0.0000000     | 1.0000010             | torch.Size([2, 512, 128])        |
| 259     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul          | input_1             | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 259     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul          | output              | torch.float32 |         | -0.8570378        | 8.0418873        | 0.0115652      | 0.9081774             | torch.Size([2, 512, 128])        |
| 260     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant          | input               | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 260     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant          | output              | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 261     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add            | input_0             | torch.float32 |         | -0.8570378        | 8.0418873        | 0.0115652      | 0.9081774             | torch.Size([2, 512, 128])        |
| 261     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add            | input_1             | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 261     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add            | output              | torch.float32 |         | -0.8540859        | 7.9945936        | 0.0735555      | 0.8713833             | torch.Size([2, 512, 128])        |
| 262     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -52.9582825       | 52.8438606       | 0.4784662      | 77.4394913            | torch.Size([2, 512, 11])         |
| 262     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | 0.0000000         | 1.1317043        | 0.4410388      | 0.1082925             | torch.Size([2, 512, 3])          |
| 263     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0                     | input               | torch.float32 |         | 0.0000000         | 1.1317043        | 0.4410388      | 0.1082925             | torch.Size([2, 512, 3])          |
| 263     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0                     | weight              | torch.float32 |         | -0.8288664        | 0.6362330        | 0.0683853      | 0.1118651             | torch.Size([32, 3])              |
| 263     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0                     | bias                | torch.float32 |         | -0.5554879        | 0.5432062        | 0.0766153      | 0.1068659             | torch.Size([32])                 |
| 263     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0                     | output              | torch.float32 |         | -1.0328802        | 0.9802916        | 0.1562257      | 0.1822727             | torch.Size([2, 512, 32])         |
| 264     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1                     | input               | torch.float32 |         | 0.0000000         | 0.9802916        | 0.2740504      | 0.0741340             | torch.Size([2, 512, 32])         |
| 264     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1                     | output              | torch.float32 |         | 0.0000000         | 0.9802916        | 0.2740504      | 0.0741340             | torch.Size([2, 512, 32])         |
| 265     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 0.9802916        | 0.2740504      | 0.0741340             | torch.Size([2, 512, 32])         |
| 265     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean     | output              | torch.float32 |         | 0.1793103         | 0.3239300        | 0.2740503      | 0.0030168             | torch.Size([2, 512, 1])          |
| 266     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub                 | input_0             | torch.float32 |         | 0.0000000         | 0.9802916        | 0.2740504      | 0.0741340             | torch.Size([2, 512, 32])         |
| 266     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub                 | input_1             | torch.float32 |         | 0.1793103         | 0.3239300        | 0.2740503      | 0.0030168             | torch.Size([2, 512, 1])          |
| 266     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub                 | output              | torch.float32 |         | -0.3239300        | 0.6563616        | 0.0000000      | 0.0711201             | torch.Size([2, 512, 32])         |
| 267     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul                 | input_0             | torch.float32 |         | -0.3239300        | 0.6563616        | 0.0000000      | 0.0711201             | torch.Size([2, 512, 32])         |
| 267     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul                 | input_1             | torch.float32 |         | -0.3239300        | 0.6563616        | 0.0000000      | 0.0711201             | torch.Size([2, 512, 32])         |
| 267     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul                 | output              | torch.float32 |         | 0.0000000         | 0.4308106        | 0.0711179      | 0.0067853             | torch.Size([2, 512, 32])         |
| 268     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean       | input_0             | torch.float32 |         | 0.0000000         | 0.4308106        | 0.0711179      | 0.0067853             | torch.Size([2, 512, 32])         |
| 268     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean       | output              | torch.float32 |         | 0.0319300         | 0.1034354        | 0.0711179      | 0.0005343             | torch.Size([2, 512, 1])          |
| 269     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt               | input               | torch.float32 |         | 0.0319300         | 0.1034354        | 0.0711179      | 0.0005343             | torch.Size([2, 512, 1])          |
| 269     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt               | output              | torch.float32 |         | 3.1091697         | 5.5954180        | 3.9874520      | 0.8710560             | torch.Size([2, 512, 1])          |
| 270     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul             | input_0             | torch.float32 |         | -0.3239300        | 0.6563616        | 0.0000000      | 0.0711201             | torch.Size([2, 512, 32])         |
| 270     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul             | input_1             | torch.float32 |         | 3.1091697         | 5.5954180        | 3.9874520      | 0.8710560             | torch.Size([2, 512, 1])          |
| 270     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul             | output              | torch.float32 |         | -1.0776136        | 2.2055702        | 0.0000000      | 0.9998628             | torch.Size([2, 512, 32])         |
| 271     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant        | input               | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 271     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant        | output              | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 272     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul          | input_0             | torch.float32 |         | -1.0776136        | 2.2055702        | 0.0000000      | 0.9998628             | torch.Size([2, 512, 32])         |
| 272     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul          | input_1             | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 272     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul          | output              | torch.float32 |         | -1.1563736        | 2.1535890        | -0.0013625     | 0.9411972             | torch.Size([2, 512, 32])         |
| 273     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant          | input               | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 273     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant          | output              | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 274     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add            | input_0             | torch.float32 |         | -1.1563736        | 2.1535890        | -0.0013625     | 0.9411972             | torch.Size([2, 512, 32])         |
| 274     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add            | input_1             | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 274     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add            | output              | torch.float32 |         | -1.1335355        | 2.1481097        | 0.0021637      | 0.8620130             | torch.Size([2, 512, 32])         |
| 275     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3                     | input               | torch.float32 |         | -1.1335355        | 2.1481097        | 0.0021637      | 0.8620130             | torch.Size([2, 512, 32])         |
| 275     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3                     | weight              | torch.float32 |         | -0.5793310        | 0.5422795        | -0.0032135     | 0.0176575             | torch.Size([32, 32])             |
| 275     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3                     | bias                | torch.float32 |         | -0.1716317        | 0.2230143        | 0.0007250      | 0.0126328             | torch.Size([32])                 |
| 275     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3                     | output              | torch.float32 |         | -3.1219692        | 2.0599036        | -0.1135476     | 1.0963038             | torch.Size([2, 512, 32])         |
| 276     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4                     | input               | torch.float32 |         | 0.0000000         | 2.0599036        | 0.3592293      | 0.2239013             | torch.Size([2, 512, 32])         |
| 276     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4                     | output              | torch.float32 |         | 0.0000000         | 2.0599036        | 0.3592293      | 0.2239013             | torch.Size([2, 512, 32])         |
| 277     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 2.0599036        | 0.3592293      | 0.2239013             | torch.Size([2, 512, 32])         |
| 277     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean     | output              | torch.float32 |         | 0.3289151         | 0.4206609        | 0.3592293      | 0.0012777             | torch.Size([2, 512, 1])          |
| 278     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub                 | input_0             | torch.float32 |         | 0.0000000         | 2.0599036        | 0.3592293      | 0.2239013             | torch.Size([2, 512, 32])         |
| 278     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub                 | input_1             | torch.float32 |         | 0.3289151         | 0.4206609        | 0.3592293      | 0.0012777             | torch.Size([2, 512, 1])          |
| 278     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub                 | output              | torch.float32 |         | -0.4206609        | 1.6392426        | -0.0000000     | 0.2226247             | torch.Size([2, 512, 32])         |
| 279     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul                 | input_0             | torch.float32 |         | -0.4206609        | 1.6392426        | -0.0000000     | 0.2226247             | torch.Size([2, 512, 32])         |
| 279     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul                 | input_1             | torch.float32 |         | -0.4206609        | 1.6392426        | -0.0000000     | 0.2226247             | torch.Size([2, 512, 32])         |
| 279     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul                 | output              | torch.float32 |         | 0.0000000         | 2.6871164        | 0.2226179      | 0.1415196             | torch.Size([2, 512, 32])         |
| 280     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean       | input_0             | torch.float32 |         | 0.0000000         | 2.6871164        | 0.2226179      | 0.1415196             | torch.Size([2, 512, 32])         |
| 280     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean       | output              | torch.float32 |         | 0.1639007         | 0.3537644        | 0.2226179      | 0.0058522             | torch.Size([2, 512, 1])          |
| 281     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt               | input               | torch.float32 |         | 0.1639007         | 0.3537644        | 0.2226179      | 0.0058522             | torch.Size([2, 512, 1])          |
| 281     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt               | output              | torch.float32 |         | 1.6812674         | 2.4699962        | 2.1963072      | 0.0927206             | torch.Size([2, 512, 1])          |
| 282     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul             | input_0             | torch.float32 |         | -0.4206609        | 1.6392426        | -0.0000000     | 0.2226247             | torch.Size([2, 512, 32])         |
| 282     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul             | input_1             | torch.float32 |         | 1.6812674         | 2.4699962        | 2.1963072      | 0.0927206             | torch.Size([2, 512, 1])          |
| 282     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul             | output              | torch.float32 |         | -0.8434560        | 3.3923469        | -0.0000000     | 0.9999814             | torch.Size([2, 512, 32])         |
| 283     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant        | input               | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 283     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant        | output              | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 284     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul          | input_0             | torch.float32 |         | -0.8434560        | 3.3923469        | -0.0000000     | 0.9999814             | torch.Size([2, 512, 32])         |
| 284     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul          | input_1             | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 284     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul          | output              | torch.float32 |         | -0.8640420        | 3.3690979        | 0.0164260      | 1.0015206             | torch.Size([2, 512, 32])         |
| 285     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant          | input               | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 285     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant          | output              | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 286     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add            | input_0             | torch.float32 |         | -0.8640420        | 3.3690979        | 0.0164260      | 1.0015206             | torch.Size([2, 512, 32])         |
| 286     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add            | input_1             | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 286     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add            | output              | torch.float32 |         | -0.8726251        | 3.3382714        | 0.0261881      | 0.9483016             | torch.Size([2, 512, 32])         |
| 287     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6                     | input               | torch.float32 |         | -0.8726251        | 3.3382714        | 0.0261881      | 0.9483016             | torch.Size([2, 512, 32])         |
| 287     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6                     | weight              | torch.float32 |         | -0.5712157        | 0.5219681        | -0.0062917     | 0.0166056             | torch.Size([32, 32])             |
| 287     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6                     | bias                | torch.float32 |         | -0.1649730        | 0.2318604        | 0.0253026      | 0.0136139             | torch.Size([32])                 |
| 287     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6                     | output              | torch.float32 |         | -4.2639079        | 2.0640340        | -0.2980174     | 1.5769963             | torch.Size([2, 512, 32])         |
| 288     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7                     | input               | torch.float32 |         | 0.0000000         | 2.0640340        | 0.3524297      | 0.2325202             | torch.Size([2, 512, 32])         |
| 288     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7                     | output              | torch.float32 |         | 0.0000000         | 2.0640340        | 0.3524297      | 0.2325202             | torch.Size([2, 512, 32])         |
| 289     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 2.0640340        | 0.3524297      | 0.2325202             | torch.Size([2, 512, 32])         |
| 289     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean     | output              | torch.float32 |         | 0.2741726         | 0.4793801        | 0.3524297      | 0.0054898             | torch.Size([2, 512, 1])          |
| 290     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub                 | input_0             | torch.float32 |         | 0.0000000         | 2.0640340        | 0.3524297      | 0.2325202             | torch.Size([2, 512, 32])         |
| 290     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub                 | input_1             | torch.float32 |         | 0.2741726         | 0.4793801        | 0.3524297      | 0.0054898             | torch.Size([2, 512, 1])          |
| 290     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub                 | output              | torch.float32 |         | -0.4793801        | 1.5846539        | 0.0000000      | 0.2270356             | torch.Size([2, 512, 32])         |
| 291     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul                 | input_0             | torch.float32 |         | -0.4793801        | 1.5846539        | 0.0000000      | 0.2270356             | torch.Size([2, 512, 32])         |
| 291     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul                 | input_1             | torch.float32 |         | -0.4793801        | 1.5846539        | 0.0000000      | 0.2270356             | torch.Size([2, 512, 32])         |
| 291     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul                 | output              | torch.float32 |         | 0.0000000         | 2.5111279        | 0.2270287      | 0.1384993             | torch.Size([2, 512, 32])         |
| 292     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean       | input_0             | torch.float32 |         | 0.0000000         | 2.5111279        | 0.2270287      | 0.1384993             | torch.Size([2, 512, 32])         |
| 292     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean       | output              | torch.float32 |         | 0.1626064         | 0.3560117        | 0.2270287      | 0.0056448             | torch.Size([2, 512, 1])          |
| 293     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt               | input               | torch.float32 |         | 0.1626064         | 0.3560117        | 0.2270287      | 0.0056448             | torch.Size([2, 512, 1])          |
| 293     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt               | output              | torch.float32 |         | 1.6759527         | 2.4798069        | 2.1696067      | 0.0849790             | torch.Size([2, 512, 1])          |
| 294     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul             | input_0             | torch.float32 |         | -0.4793801        | 1.5846539        | 0.0000000      | 0.2270356             | torch.Size([2, 512, 32])         |
| 294     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul             | input_1             | torch.float32 |         | 1.6759527         | 2.4798069        | 2.1696067      | 0.0849790             | torch.Size([2, 512, 1])          |
| 294     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul             | output              | torch.float32 |         | -0.8066008        | 3.2624860        | 0.0000000      | 0.9999826             | torch.Size([2, 512, 32])         |
| 295     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant        | input               | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 295     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant        | output              | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 296     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul          | input_0             | torch.float32 |         | -0.8066008        | 3.2624860        | 0.0000000      | 0.9999826             | torch.Size([2, 512, 32])         |
| 296     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul          | input_1             | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 296     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul          | output              | torch.float32 |         | -0.9127076        | 3.2228432        | 0.0056076      | 1.0129315             | torch.Size([2, 512, 32])         |
| 297     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant          | input               | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 297     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant          | output              | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 298     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add            | input_0             | torch.float32 |         | -0.9127076        | 3.2228432        | 0.0056076      | 1.0129315             | torch.Size([2, 512, 32])         |
| 298     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add            | input_1             | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 298     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add            | output              | torch.float32 |         | -0.8816268        | 3.1793082        | 0.0098038      | 0.9667275             | torch.Size([2, 512, 32])         |
| 299     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9                     | input               | torch.float32 |         | -0.8816268        | 3.1793082        | 0.0098038      | 0.9667275             | torch.Size([2, 512, 32])         |
| 299     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9                     | weight              | torch.float32 |         | -0.3204980        | 0.3365203        | -0.0020388     | 0.0145364             | torch.Size([32, 32])             |
| 299     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9                     | bias                | torch.float32 |         | -0.1559148        | 0.2119379        | 0.0091616      | 0.0105488             | torch.Size([32])                 |
| 299     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9                     | output              | torch.float32 |         | -2.4091580        | 2.2504392        | -0.0813697     | 0.8593249             | torch.Size([2, 512, 32])         |
| 300     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10                    | input               | torch.float32 |         | 0.0000000         | 2.2504392        | 0.3465897      | 0.2596825             | torch.Size([2, 512, 32])         |
| 300     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10                    | output              | torch.float32 |         | 0.0000000         | 2.2504392        | 0.3465897      | 0.2596825             | torch.Size([2, 512, 32])         |
| 301     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean    | input_0             | torch.float32 |         | 0.0000000         | 2.2504392        | 0.3465897      | 0.2596825             | torch.Size([2, 512, 32])         |
| 301     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean    | output              | torch.float32 |         | 0.2960429         | 0.4140567        | 0.3465897      | 0.0004643             | torch.Size([2, 512, 1])          |
| 302     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub                | input_0             | torch.float32 |         | 0.0000000         | 2.2504392        | 0.3465897      | 0.2596825             | torch.Size([2, 512, 32])         |
| 302     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub                | input_1             | torch.float32 |         | 0.2960429         | 0.4140567        | 0.3465897      | 0.0004643             | torch.Size([2, 512, 1])          |
| 302     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub                | output              | torch.float32 |         | -0.4140567        | 1.8935468        | -0.0000000     | 0.2592186             | torch.Size([2, 512, 32])         |
| 303     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul                | input_0             | torch.float32 |         | -0.4140567        | 1.8935468        | -0.0000000     | 0.2592186             | torch.Size([2, 512, 32])         |
| 303     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul                | input_1             | torch.float32 |         | -0.4140567        | 1.8935468        | -0.0000000     | 0.2592186             | torch.Size([2, 512, 32])         |
| 303     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul                | output              | torch.float32 |         | 0.0000000         | 3.5855196        | 0.2592107      | 0.2477277             | torch.Size([2, 512, 32])         |
| 304     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 3.5855196        | 0.2592107      | 0.2477277             | torch.Size([2, 512, 32])         |
| 304     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean      | output              | torch.float32 |         | 0.1921749         | 0.3122948        | 0.2592107      | 0.0012575             | torch.Size([2, 512, 1])          |
| 305     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt              | input               | torch.float32 |         | 0.1921749         | 0.3122948        | 0.2592107      | 0.0012575             | torch.Size([2, 512, 1])          |
| 305     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt              | output              | torch.float32 |         | 1.7894133         | 2.2810791        | 1.9776421      | 0.0176150             | torch.Size([2, 512, 1])          |
| 306     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul            | input_0             | torch.float32 |         | -0.4140567        | 1.8935468        | -0.0000000     | 0.2592186             | torch.Size([2, 512, 32])         |
| 306     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul            | input_1             | torch.float32 |         | 1.7894133         | 2.2810791        | 1.9776421      | 0.0176150             | torch.Size([2, 512, 1])          |
| 306     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul            | output              | torch.float32 |         | -0.8632326        | 3.5528316        | -0.0000000     | 0.9999912             | torch.Size([2, 512, 32])         |
| 307     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant       | input               | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 307     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant       | output              | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 308     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul         | input_0             | torch.float32 |         | -0.8632326        | 3.5528316        | -0.0000000     | 0.9999912             | torch.Size([2, 512, 32])         |
| 308     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul         | input_1             | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 308     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul         | output              | torch.float32 |         | -1.3938733        | 4.6520419        | -0.0616757     | 1.3994594             | torch.Size([2, 512, 32])         |
| 309     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant         | input               | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 309     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant         | output              | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 310     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add           | input_0             | torch.float32 |         | -1.3938733        | 4.6520419        | -0.0616757     | 1.3994594             | torch.Size([2, 512, 32])         |
| 310     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add           | input_1             | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 310     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add           | output              | torch.float32 |         | -1.2276719        | 4.6139283        | -0.0171071     | 1.2521633             | torch.Size([2, 512, 32])         |
| 311     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -52.9582825       | 52.8438606       | 0.4784662      | 77.4394913            | torch.Size([2, 512, 11])         |
| 311     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -0.2428340        | 1.2461264        | 0.3691516      | 0.2347650             | torch.Size([2, 512, 2])          |
| 312     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0                      | input               | torch.float32 |         | -0.2428340        | 1.2461264        | 0.3691516      | 0.2347650             | torch.Size([2, 512, 2])          |
| 312     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0                      | weight              | torch.float32 |         | -0.7023237        | 0.7394427        | 0.0490668      | 0.1972211             | torch.Size([32, 2])              |
| 312     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0                      | bias                | torch.float32 |         | -0.7971504        | 0.6681666        | -0.1171320     | 0.1641774             | torch.Size([32])                 |
| 312     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0                      | output              | torch.float32 |         | -1.5355964        | 1.3850498        | -0.0756322     | 0.3078158             | torch.Size([2, 512, 32])         |
| 313     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1                      | input               | torch.float32 |         | 0.0000000         | 1.3850498        | 0.1910857      | 0.0770849             | torch.Size([2, 512, 32])         |
| 313     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1                      | output              | torch.float32 |         | 0.0000000         | 1.3850498        | 0.1910857      | 0.0770849             | torch.Size([2, 512, 32])         |
| 314     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 1.3850498        | 0.1910857      | 0.0770849             | torch.Size([2, 512, 32])         |
| 314     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean      | output              | torch.float32 |         | 0.1227766         | 0.2764062        | 0.1910857      | 0.0017720             | torch.Size([2, 512, 1])          |
| 315     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub                  | input_0             | torch.float32 |         | 0.0000000         | 1.3850498        | 0.1910857      | 0.0770849             | torch.Size([2, 512, 32])         |
| 315     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub                  | input_1             | torch.float32 |         | 0.1227766         | 0.2764062        | 0.1910857      | 0.0017720             | torch.Size([2, 512, 1])          |
| 315     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub                  | output              | torch.float32 |         | -0.2764062        | 1.1086435        | -0.0000000     | 0.0753146             | torch.Size([2, 512, 32])         |
| 316     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul                  | input_0             | torch.float32 |         | -0.2764062        | 1.1086435        | -0.0000000     | 0.0753146             | torch.Size([2, 512, 32])         |
| 316     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul                  | input_1             | torch.float32 |         | -0.2764062        | 1.1086435        | -0.0000000     | 0.0753146             | torch.Size([2, 512, 32])         |
| 316     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul                  | output              | torch.float32 |         | 0.0000000         | 1.2290905        | 0.0753123      | 0.0190258             | torch.Size([2, 512, 32])         |
| 317     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean        | input_0             | torch.float32 |         | 0.0000000         | 1.2290905        | 0.0753123      | 0.0190258             | torch.Size([2, 512, 32])         |
| 317     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean        | output              | torch.float32 |         | 0.0449728         | 0.1347227        | 0.0753123      | 0.0003938             | torch.Size([2, 512, 1])          |
| 318     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt                | input               | torch.float32 |         | 0.0449728         | 0.1347227        | 0.0753123      | 0.0003938             | torch.Size([2, 512, 1])          |
| 318     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt                | output              | torch.float32 |         | 2.7243540         | 4.7149482        | 3.7593472      | 0.3380035             | torch.Size([2, 512, 1])          |
| 319     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul              | input_0             | torch.float32 |         | -0.2764062        | 1.1086435        | -0.0000000     | 0.0753146             | torch.Size([2, 512, 32])         |
| 319     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul              | input_1             | torch.float32 |         | 2.7243540         | 4.7149482        | 3.7593472      | 0.3380035             | torch.Size([2, 512, 1])          |
| 319     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul              | output              | torch.float32 |         | -0.7590794        | 3.2233562        | 0.0000000      | 0.9998858             | torch.Size([2, 512, 32])         |
| 320     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant         | input               | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 320     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant         | output              | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 321     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul           | input_0             | torch.float32 |         | -0.7590794        | 3.2233562        | 0.0000000      | 0.9998858             | torch.Size([2, 512, 32])         |
| 321     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul           | input_1             | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 321     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul           | output              | torch.float32 |         | -0.8635625        | 3.1354637        | -0.0016722     | 0.9735731             | torch.Size([2, 512, 32])         |
| 322     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant           | input               | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 322     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant           | output              | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 323     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add             | input_0             | torch.float32 |         | -0.8635625        | 3.1354637        | -0.0016722     | 0.9735731             | torch.Size([2, 512, 32])         |
| 323     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add             | input_1             | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 323     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add             | output              | torch.float32 |         | -0.8082764        | 3.0474689        | 0.0268317      | 0.8883949             | torch.Size([2, 512, 32])         |
| 324     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3                      | input               | torch.float32 |         | -0.8082764        | 3.0474689        | 0.0268317      | 0.8883949             | torch.Size([2, 512, 32])         |
| 324     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3                      | weight              | torch.float32 |         | -1.0547366        | 0.5812716        | 0.0070099      | 0.0187704             | torch.Size([32, 32])             |
| 324     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3                      | bias                | torch.float32 |         | -0.2183180        | 0.1396109        | -0.0140744     | 0.0103446             | torch.Size([32])                 |
| 324     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3                      | output              | torch.float32 |         | -3.4549761        | 1.3642458        | -0.4488208     | 1.3029550             | torch.Size([2, 512, 32])         |
| 325     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4                      | input               | torch.float32 |         | 0.0000000         | 1.3642458        | 0.2489511      | 0.1095721             | torch.Size([2, 512, 32])         |
| 325     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4                      | output              | torch.float32 |         | 0.0000000         | 1.3642458        | 0.2489511      | 0.1095721             | torch.Size([2, 512, 32])         |
| 326     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 1.3642458        | 0.2489511      | 0.1095721             | torch.Size([2, 512, 32])         |
| 326     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean      | output              | torch.float32 |         | 0.2144799         | 0.3024941        | 0.2489511      | 0.0002928             | torch.Size([2, 512, 1])          |
| 327     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub                  | input_0             | torch.float32 |         | 0.0000000         | 1.3642458        | 0.2489511      | 0.1095721             | torch.Size([2, 512, 32])         |
| 327     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub                  | input_1             | torch.float32 |         | 0.2144799         | 0.3024941        | 0.2489511      | 0.0002928             | torch.Size([2, 512, 1])          |
| 327     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub                  | output              | torch.float32 |         | -0.3024941        | 1.0964792        | 0.0000000      | 0.1092796             | torch.Size([2, 512, 32])         |
| 328     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul                  | input_0             | torch.float32 |         | -0.3024941        | 1.0964792        | 0.0000000      | 0.1092796             | torch.Size([2, 512, 32])         |
| 328     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul                  | input_1             | torch.float32 |         | -0.3024941        | 1.0964792        | 0.0000000      | 0.1092796             | torch.Size([2, 512, 32])         |
| 328     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul                  | output              | torch.float32 |         | 0.0000001         | 1.2022666        | 0.1092763      | 0.0329098             | torch.Size([2, 512, 32])         |
| 329     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean        | input_0             | torch.float32 |         | 0.0000001         | 1.2022666        | 0.1092763      | 0.0329098             | torch.Size([2, 512, 32])         |
| 329     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean        | output              | torch.float32 |         | 0.0907572         | 0.1501710        | 0.1092763      | 0.0001188             | torch.Size([2, 512, 1])          |
| 330     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt                | input               | torch.float32 |         | 0.0907572         | 0.1501710        | 0.1092763      | 0.0001188             | torch.Size([2, 512, 1])          |
| 330     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt                | output              | torch.float32 |         | 2.5804329         | 3.3192155        | 3.0359299      | 0.0218091             | torch.Size([2, 512, 1])          |
| 331     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul              | input_0             | torch.float32 |         | -0.3024941        | 1.0964792        | 0.0000000      | 0.1092796             | torch.Size([2, 512, 32])         |
| 331     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul              | input_1             | torch.float32 |         | 2.5804329         | 3.3192155        | 3.0359299      | 0.0218091             | torch.Size([2, 512, 1])          |
| 331     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul              | output              | torch.float32 |         | -0.8447080        | 3.1269784        | 0.0000000      | 0.9999381             | torch.Size([2, 512, 32])         |
| 332     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant         | input               | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 332     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant         | output              | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 333     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul           | input_0             | torch.float32 |         | -0.8447080        | 3.1269784        | 0.0000000      | 0.9999381             | torch.Size([2, 512, 32])         |
| 333     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul           | input_1             | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 333     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul           | output              | torch.float32 |         | -0.9187081        | 3.1492577        | 0.0044772      | 0.9796643             | torch.Size([2, 512, 32])         |
| 334     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant           | input               | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 334     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant           | output              | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 335     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add             | input_0             | torch.float32 |         | -0.9187081        | 3.1492577        | 0.0044772      | 0.9796643             | torch.Size([2, 512, 32])         |
| 335     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add             | input_1             | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 335     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add             | output              | torch.float32 |         | -0.8533856        | 3.1084414        | 0.0287215      | 0.9214084             | torch.Size([2, 512, 32])         |
| 336     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6                      | input               | torch.float32 |         | -0.8533856        | 3.1084414        | 0.0287215      | 0.9214084             | torch.Size([2, 512, 32])         |
| 336     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6                      | weight              | torch.float32 |         | -0.4480607        | 0.3678726        | 0.0004879      | 0.0160908             | torch.Size([32, 32])             |
| 336     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6                      | bias                | torch.float32 |         | -0.1861591        | 0.1739754        | 0.0155446      | 0.0137690             | torch.Size([32])                 |
| 336     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6                      | output              | torch.float32 |         | -3.6199901        | 2.3434565        | -0.0827050     | 1.2385832             | torch.Size([2, 512, 32])         |
| 337     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7                      | input               | torch.float32 |         | 0.0000000         | 2.3434565        | 0.4124202      | 0.2605692             | torch.Size([2, 512, 32])         |
| 337     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7                      | output              | torch.float32 |         | 0.0000000         | 2.3434565        | 0.4124202      | 0.2605692             | torch.Size([2, 512, 32])         |
| 338     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 2.3434565        | 0.4124202      | 0.2605692             | torch.Size([2, 512, 32])         |
| 338     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean      | output              | torch.float32 |         | 0.3374409         | 0.5175337        | 0.4124202      | 0.0022484             | torch.Size([2, 512, 1])          |
| 339     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub                  | input_0             | torch.float32 |         | 0.0000000         | 2.3434565        | 0.4124202      | 0.2605692             | torch.Size([2, 512, 32])         |
| 339     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub                  | input_1             | torch.float32 |         | 0.3374409         | 0.5175337        | 0.4124202      | 0.0022484             | torch.Size([2, 512, 1])          |
| 339     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub                  | output              | torch.float32 |         | -0.5175337        | 1.8259227        | -0.0000000     | 0.2583229             | torch.Size([2, 512, 32])         |
| 340     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul                  | input_0             | torch.float32 |         | -0.5175337        | 1.8259227        | -0.0000000     | 0.2583229             | torch.Size([2, 512, 32])         |
| 340     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul                  | input_1             | torch.float32 |         | -0.5175337        | 1.8259227        | -0.0000000     | 0.2583229             | torch.Size([2, 512, 32])         |
| 340     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul                  | output              | torch.float32 |         | 0.0000000         | 3.3339939        | 0.2583150      | 0.1053889             | torch.Size([2, 512, 32])         |
| 341     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean        | input_0             | torch.float32 |         | 0.0000000         | 3.3339939        | 0.2583150      | 0.1053889             | torch.Size([2, 512, 32])         |
| 341     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean        | output              | torch.float32 |         | 0.1833911         | 0.4170344        | 0.2583150      | 0.0026001             | torch.Size([2, 512, 1])          |
| 342     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt                | input               | torch.float32 |         | 0.1833911         | 0.4170344        | 0.2583150      | 0.0026001             | torch.Size([2, 512, 1])          |
| 342     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt                | output              | torch.float32 |         | 1.5484915         | 2.3350654        | 1.9984670      | 0.0441990             | torch.Size([2, 512, 1])          |
| 343     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul              | input_0             | torch.float32 |         | -0.5175337        | 1.8259227        | -0.0000000     | 0.2583229             | torch.Size([2, 512, 32])         |
| 343     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul              | input_1             | torch.float32 |         | 1.5484915         | 2.3350654        | 1.9984670      | 0.0441990             | torch.Size([2, 512, 1])          |
| 343     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul              | output              | torch.float32 |         | -0.8642094        | 2.8602281        | 0.0000000      | 0.9999900             | torch.Size([2, 512, 32])         |
| 344     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant         | input               | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 344     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant         | output              | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 345     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul           | input_0             | torch.float32 |         | -0.8642094        | 2.8602281        | 0.0000000      | 0.9999900             | torch.Size([2, 512, 32])         |
| 345     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul           | input_1             | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 345     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul           | output              | torch.float32 |         | -0.9430081        | 2.8208735        | 0.0063461      | 0.9886682             | torch.Size([2, 512, 32])         |
| 346     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant           | input               | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 346     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant           | output              | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 347     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add             | input_0             | torch.float32 |         | -0.9430081        | 2.8208735        | 0.0063461      | 0.9886682             | torch.Size([2, 512, 32])         |
| 347     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add             | input_1             | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 347     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add             | output              | torch.float32 |         | -0.9104147        | 2.7926209        | 0.0135158      | 0.9497185             | torch.Size([2, 512, 32])         |
| 348     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9                      | input               | torch.float32 |         | -0.9104147        | 2.7926209        | 0.0135158      | 0.9497185             | torch.Size([2, 512, 32])         |
| 348     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9                      | weight              | torch.float32 |         | -0.5597425        | 0.7001730        | 0.0015679      | 0.0160348             | torch.Size([32, 32])             |
| 348     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9                      | bias                | torch.float32 |         | -0.1810580        | 0.1736723        | -0.0279047     | 0.0091159             | torch.Size([32])                 |
| 348     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9                      | output              | torch.float32 |         | -3.9054508        | 3.4945192        | -0.1439822     | 1.2469910             | torch.Size([2, 512, 32])         |
| 349     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10                     | input               | torch.float32 |         | 0.0000000         | 3.4945192        | 0.3051969      | 0.4035255             | torch.Size([2, 512, 32])         |
| 349     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10                     | output              | torch.float32 |         | 0.0000000         | 3.4945192        | 0.3051969      | 0.4035255             | torch.Size([2, 512, 32])         |
| 350     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 3.4945192        | 0.3051969      | 0.4035255             | torch.Size([2, 512, 32])         |
| 350     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean     | output              | torch.float32 |         | 0.2816672         | 0.3658091        | 0.3051969      | 0.0005328             | torch.Size([2, 512, 1])          |
| 351     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub                 | input_0             | torch.float32 |         | 0.0000000         | 3.4945192        | 0.3051969      | 0.4035255             | torch.Size([2, 512, 32])         |
| 351     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub                 | input_1             | torch.float32 |         | 0.2816672         | 0.3658091        | 0.3051969      | 0.0005328             | torch.Size([2, 512, 1])          |
| 351     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub                 | output              | torch.float32 |         | -0.3658091        | 3.2003901        | 0.0000000      | 0.4029932             | torch.Size([2, 512, 32])         |
| 352     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul                 | input_0             | torch.float32 |         | -0.3658091        | 3.2003901        | 0.0000000      | 0.4029932             | torch.Size([2, 512, 32])         |
| 352     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul                 | input_1             | torch.float32 |         | -0.3658091        | 3.2003901        | 0.0000000      | 0.4029932             | torch.Size([2, 512, 32])         |
| 352     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul                 | output              | torch.float32 |         | 0.0000000         | 10.2424965       | 0.4029809      | 1.9142085             | torch.Size([2, 512, 32])         |
| 353     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean       | input_0             | torch.float32 |         | 0.0000000         | 10.2424965       | 0.4029809      | 1.9142085             | torch.Size([2, 512, 32])         |
| 353     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean       | output              | torch.float32 |         | 0.2786766         | 0.4479688        | 0.4029809      | 0.0012556             | torch.Size([2, 512, 1])          |
| 354     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt               | input               | torch.float32 |         | 0.2786766         | 0.4479688        | 0.4029809      | 0.0012556             | torch.Size([2, 512, 1])          |
| 354     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt               | output              | torch.float32 |         | 1.4940711         | 1.8942703        | 1.5804031      | 0.0059942             | torch.Size([2, 512, 1])          |
| 355     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul             | input_0             | torch.float32 |         | -0.3658091        | 3.2003901        | 0.0000000      | 0.4029932             | torch.Size([2, 512, 32])         |
| 355     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul             | input_1             | torch.float32 |         | 1.4940711         | 1.8942703        | 1.5804031      | 0.0059942             | torch.Size([2, 512, 1])          |
| 355     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul             | output              | torch.float32 |         | -0.6735247        | 4.8591447        | 0.0000000      | 1.0000055             | torch.Size([2, 512, 32])         |
| 356     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant        | input               | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 356     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant        | output              | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 357     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul          | input_0             | torch.float32 |         | -0.6735247        | 4.8591447        | 0.0000000      | 1.0000055             | torch.Size([2, 512, 32])         |
| 357     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul          | input_1             | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 357     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul          | output              | torch.float32 |         | -0.9892963        | 4.8895106        | -0.0317544     | 0.9916357             | torch.Size([2, 512, 32])         |
| 358     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant          | input               | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 358     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant          | output              | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 359     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add            | input_0             | torch.float32 |         | -0.9892963        | 4.8895106        | -0.0317544     | 0.9916357             | torch.Size([2, 512, 32])         |
| 359     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add            | input_1             | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 359     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add            | output              | torch.float32 |         | -0.8153484        | 4.8323817        | 0.0486247      | 0.9006547             | torch.Size([2, 512, 32])         |
| 360     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -52.9582825       | 52.8438606       | 0.4784662      | 77.4394913            | torch.Size([2, 512, 11])         |
| 360     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -0.2876937        | 0.2875521        | 0.0021415      | 0.0049365             | torch.Size([2, 512, 3])          |
| 361     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0                      | input               | torch.float32 |         | -0.2876937        | 0.2875521        | 0.0021415      | 0.0049365             | torch.Size([2, 512, 3])          |
| 361     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0                      | weight              | torch.float32 |         | -1.0475703        | 0.9848034        | -0.0054673     | 0.2080412             | torch.Size([64, 3])              |
| 361     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0                      | bias                | torch.float32 |         | -0.8030427        | 0.5068271        | -0.0504076     | 0.1294928             | torch.Size([64])                 |
| 361     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0                      | output              | torch.float32 |         | -0.9049715        | 0.7016723        | -0.0510570     | 0.1305059             | torch.Size([2, 512, 64])         |
| 362     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1                      | input               | torch.float32 |         | 0.0000000         | 0.7016723        | 0.1294098      | 0.0288766             | torch.Size([2, 512, 64])         |
| 362     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1                      | output              | torch.float32 |         | 0.0000000         | 0.7016723        | 0.1294098      | 0.0288766             | torch.Size([2, 512, 64])         |
| 363     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 0.7016723        | 0.1294098      | 0.0288766             | torch.Size([2, 512, 64])         |
| 363     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean      | output              | torch.float32 |         | 0.1221597         | 0.1425653        | 0.1294098      | 0.0000097             | torch.Size([2, 512, 1])          |
| 364     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub                  | input_0             | torch.float32 |         | 0.0000000         | 0.7016723        | 0.1294098      | 0.0288766             | torch.Size([2, 512, 64])         |
| 364     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub                  | input_1             | torch.float32 |         | 0.1221597         | 0.1425653        | 0.1294098      | 0.0000097             | torch.Size([2, 512, 1])          |
| 364     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub                  | output              | torch.float32 |         | -0.1425653        | 0.5592573        | 0.0000000      | 0.0288669             | torch.Size([2, 512, 64])         |
| 365     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul                  | input_0             | torch.float32 |         | -0.1425653        | 0.5592573        | 0.0000000      | 0.0288669             | torch.Size([2, 512, 64])         |
| 365     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul                  | input_1             | torch.float32 |         | -0.1425653        | 0.5592573        | 0.0000000      | 0.0288669             | torch.Size([2, 512, 64])         |
| 365     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul                  | output              | torch.float32 |         | 0.0000000         | 0.3127687        | 0.0288664      | 0.0015711             | torch.Size([2, 512, 64])         |
| 366     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean        | input_0             | torch.float32 |         | 0.0000000         | 0.3127687        | 0.0288664      | 0.0015711             | torch.Size([2, 512, 64])         |
| 366     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean        | output              | torch.float32 |         | 0.0269029         | 0.0396912        | 0.0288664      | 0.0000054             | torch.Size([2, 512, 1])          |
| 367     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt                | input               | torch.float32 |         | 0.0269029         | 0.0396912        | 0.0288664      | 0.0000054             | torch.Size([2, 512, 1])          |
| 367     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt                | output              | torch.float32 |         | 5.0187836         | 6.0956445        | 5.8975887      | 0.0464929             | torch.Size([2, 512, 1])          |
| 368     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul              | input_0             | torch.float32 |         | -0.1425653        | 0.5592573        | 0.0000000      | 0.0288669             | torch.Size([2, 512, 64])         |
| 368     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul              | input_1             | torch.float32 |         | 5.0187836         | 6.0956445        | 5.8975887      | 0.0464929             | torch.Size([2, 512, 1])          |
| 368     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul              | output              | torch.float32 |         | -0.8156720        | 2.9662776        | 0.0000000      | 0.9996671             | torch.Size([2, 512, 64])         |
| 369     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant         | input               | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 369     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant         | output              | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 370     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul           | input_0             | torch.float32 |         | -0.8156720        | 2.9662776        | 0.0000000      | 0.9996671             | torch.Size([2, 512, 64])         |
| 370     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul           | input_1             | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 370     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul           | output              | torch.float32 |         | -0.8911289        | 2.9611311        | 0.0105272      | 0.9523847             | torch.Size([2, 512, 64])         |
| 371     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant           | input               | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 371     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant           | output              | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 372     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add             | input_0             | torch.float32 |         | -0.8911289        | 2.9611311        | 0.0105272      | 0.9523847             | torch.Size([2, 512, 64])         |
| 372     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add             | input_1             | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 372     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add             | output              | torch.float32 |         | -0.8842805        | 2.9108393        | 0.0409812      | 0.8473449             | torch.Size([2, 512, 64])         |
| 373     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3                      | input               | torch.float32 |         | -0.8842805        | 2.9108393        | 0.0409812      | 0.8473449             | torch.Size([2, 512, 64])         |
| 373     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3                      | weight              | torch.float32 |         | -0.4523612        | 0.4813256        | -0.0014562     | 0.0096743             | torch.Size([64, 64])             |
| 373     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3                      | bias                | torch.float32 |         | -0.1183558        | 0.2243176        | 0.0150283      | 0.0049289             | torch.Size([64])                 |
| 373     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3                      | output              | torch.float32 |         | -5.3853030        | 2.6172597        | -0.4548657     | 2.8959248             | torch.Size([2, 512, 64])         |
| 374     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4                      | input               | torch.float32 |         | 0.0000000         | 2.6172597        | 0.4000499      | 0.2915202             | torch.Size([2, 512, 64])         |
| 374     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4                      | output              | torch.float32 |         | 0.0000000         | 2.6172597        | 0.4000499      | 0.2915202             | torch.Size([2, 512, 64])         |
| 375     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 2.6172597        | 0.4000499      | 0.2915202             | torch.Size([2, 512, 64])         |
| 375     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean      | output              | torch.float32 |         | 0.3459151         | 0.4784797        | 0.4000499      | 0.0007547             | torch.Size([2, 512, 1])          |
| 376     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub                  | input_0             | torch.float32 |         | 0.0000000         | 2.6172597        | 0.4000499      | 0.2915202             | torch.Size([2, 512, 64])         |
| 376     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub                  | input_1             | torch.float32 |         | 0.3459151         | 0.4784797        | 0.4000499      | 0.0007547             | torch.Size([2, 512, 1])          |
| 376     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub                  | output              | torch.float32 |         | -0.4784797        | 2.1391723        | -0.0000000     | 0.2907662             | torch.Size([2, 512, 64])         |
| 377     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul                  | input_0             | torch.float32 |         | -0.4784797        | 2.1391723        | -0.0000000     | 0.2907662             | torch.Size([2, 512, 64])         |
| 377     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul                  | input_1             | torch.float32 |         | -0.4784797        | 2.1391723        | -0.0000000     | 0.2907662             | torch.Size([2, 512, 64])         |
| 377     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul                  | output              | torch.float32 |         | 0.0000000         | 4.5760584        | 0.2907617      | 0.3028654             | torch.Size([2, 512, 64])         |
| 378     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean        | input_0             | torch.float32 |         | 0.0000000         | 4.5760584        | 0.2907617      | 0.3028654             | torch.Size([2, 512, 64])         |
| 378     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean        | output              | torch.float32 |         | 0.2481127         | 0.4772718        | 0.2907617      | 0.0020953             | torch.Size([2, 512, 1])          |
| 379     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt                | input               | torch.float32 |         | 0.2481127         | 0.4772718        | 0.2907617      | 0.0020953             | torch.Size([2, 512, 1])          |
| 379     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt                | output              | torch.float32 |         | 1.4474800         | 2.0075519        | 1.8701646      | 0.0181640             | torch.Size([2, 512, 1])          |
| 380     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul              | input_0             | torch.float32 |         | -0.4784797        | 2.1391723        | -0.0000000     | 0.2907662             | torch.Size([2, 512, 64])         |
| 380     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul              | input_1             | torch.float32 |         | 1.4474800         | 2.0075519        | 1.8701646      | 0.0181640             | torch.Size([2, 512, 1])          |
| 380     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul              | output              | torch.float32 |         | -0.8424010        | 3.7576196        | -0.0000000     | 0.9999800             | torch.Size([2, 512, 64])         |
| 381     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant         | input               | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 381     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant         | output              | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 382     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul           | input_0             | torch.float32 |         | -0.8424010        | 3.7576196        | -0.0000000     | 0.9999800             | torch.Size([2, 512, 64])         |
| 382     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul           | input_1             | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 382     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul           | output              | torch.float32 |         | -0.9011380        | 4.1002803        | 0.0076117      | 1.0159723             | torch.Size([2, 512, 64])         |
| 383     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant           | input               | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 383     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant           | output              | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 384     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add             | input_0             | torch.float32 |         | -0.9011380        | 4.1002803        | 0.0076117      | 1.0159723             | torch.Size([2, 512, 64])         |
| 384     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add             | input_1             | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 384     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add             | output              | torch.float32 |         | -0.8931798        | 4.0833645        | 0.0241060      | 0.9662086             | torch.Size([2, 512, 64])         |
| 385     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6                      | input               | torch.float32 |         | -0.8931798        | 4.0833645        | 0.0241060      | 0.9662086             | torch.Size([2, 512, 64])         |
| 385     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6                      | weight              | torch.float32 |         | -0.5707353        | 0.3620123        | -0.0010372     | 0.0088292             | torch.Size([64, 64])             |
| 385     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6                      | bias                | torch.float32 |         | -0.1720246        | 0.1340137        | -0.0235144     | 0.0050507             | torch.Size([64])                 |
| 385     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6                      | output              | torch.float32 |         | -5.4152522        | 3.7293291        | -0.3618882     | 2.5204153             | torch.Size([2, 512, 64])         |
| 386     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7                      | input               | torch.float32 |         | 0.0000000         | 3.7293291        | 0.4771840      | 0.6274447             | torch.Size([2, 512, 64])         |
| 386     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7                      | output              | torch.float32 |         | 0.0000000         | 3.7293291        | 0.4771840      | 0.6274447             | torch.Size([2, 512, 64])         |
| 387     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean      | input_0             | torch.float32 |         | 0.0000000         | 3.7293291        | 0.4771840      | 0.6274447             | torch.Size([2, 512, 64])         |
| 387     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean      | output              | torch.float32 |         | 0.3895661         | 0.5116951        | 0.4771840      | 0.0005931             | torch.Size([2, 512, 1])          |
| 388     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub                  | input_0             | torch.float32 |         | 0.0000000         | 3.7293291        | 0.4771840      | 0.6274447             | torch.Size([2, 512, 64])         |
| 388     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub                  | input_1             | torch.float32 |         | 0.3895661         | 0.5116951        | 0.4771840      | 0.0005931             | torch.Size([2, 512, 1])          |
| 388     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub                  | output              | torch.float32 |         | -0.5116951        | 3.2339525        | -0.0000000     | 0.6268522             | torch.Size([2, 512, 64])         |
| 389     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul                  | input_0             | torch.float32 |         | -0.5116951        | 3.2339525        | -0.0000000     | 0.6268522             | torch.Size([2, 512, 64])         |
| 389     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul                  | input_1             | torch.float32 |         | -0.5116951        | 3.2339525        | -0.0000000     | 0.6268522             | torch.Size([2, 512, 64])         |
| 389     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul                  | output              | torch.float32 |         | 0.0000000         | 10.4584494       | 0.6268426      | 1.6686329             | torch.Size([2, 512, 64])         |
| 390     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean        | input_0             | torch.float32 |         | 0.0000000         | 10.4584494       | 0.6268426      | 1.6686329             | torch.Size([2, 512, 64])         |
| 390     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean        | output              | torch.float32 |         | 0.4068461         | 0.7665273        | 0.6268427      | 0.0020664             | torch.Size([2, 512, 1])          |
| 391     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt                | input               | torch.float32 |         | 0.4068461         | 0.7665273        | 0.6268427      | 0.0020664             | torch.Size([2, 512, 1])          |
| 391     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt                | output              | torch.float32 |         | 1.1421769         | 1.5677600        | 1.2657079      | 0.0024121             | torch.Size([2, 512, 1])          |
| 392     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul              | input_0             | torch.float32 |         | -0.5116951        | 3.2339525        | -0.0000000     | 0.6268522             | torch.Size([2, 512, 64])         |
| 392     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul              | input_1             | torch.float32 |         | 1.1421769         | 1.5677600        | 1.2657079      | 0.0024121             | torch.Size([2, 512, 1])          |
| 392     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul              | output              | torch.float32 |         | -0.6736512        | 3.9972827        | -0.0000000     | 0.9999992             | torch.Size([2, 512, 64])         |
| 393     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant         | input               | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 393     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant         | output              | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 394     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul           | input_0             | torch.float32 |         | -0.6736512        | 3.9972827        | -0.0000000     | 0.9999992             | torch.Size([2, 512, 64])         |
| 394     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul           | input_1             | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 394     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul           | output              | torch.float32 |         | -0.7743863        | 4.1330662        | 0.0094477      | 1.0099080             | torch.Size([2, 512, 64])         |
| 395     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant           | input               | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 395     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant           | output              | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 396     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add             | input_0             | torch.float32 |         | -0.7743863        | 4.1330662        | 0.0094477      | 1.0099080             | torch.Size([2, 512, 64])         |
| 396     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add             | input_1             | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 396     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add             | output              | torch.float32 |         | -0.7538345        | 4.1186957        | 0.0227305      | 0.9915734             | torch.Size([2, 512, 64])         |
| 397     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9                      | input               | torch.float32 |         | -0.7538345        | 4.1186957        | 0.0227305      | 0.9915734             | torch.Size([2, 512, 64])         |
| 397     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9                      | weight              | torch.float32 |         | -0.5701389        | 0.3477888        | 0.0006721      | 0.0085883             | torch.Size([64, 64])             |
| 397     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9                      | bias                | torch.float32 |         | -0.1677032        | 0.1709885        | -0.0237130     | 0.0070098             | torch.Size([64])                 |
| 397     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9                      | output              | torch.float32 |         | -4.4999785        | 7.1846175        | -0.5453271     | 1.9584367             | torch.Size([2, 512, 64])         |
| 398     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10                     | input               | torch.float32 |         | 0.0000000         | 7.1846175        | 0.2606024      | 0.7096657             | torch.Size([2, 512, 64])         |
| 398     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10                     | output              | torch.float32 |         | 0.0000000         | 7.1846175        | 0.2606024      | 0.7096657             | torch.Size([2, 512, 64])         |
| 399     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 7.1846175        | 0.2606024      | 0.7096657             | torch.Size([2, 512, 64])         |
| 399     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean     | output              | torch.float32 |         | 0.2098667         | 0.3422839        | 0.2606024      | 0.0016773             | torch.Size([2, 512, 1])          |
| 400     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub                 | input_0             | torch.float32 |         | 0.0000000         | 7.1846175        | 0.2606024      | 0.7096657             | torch.Size([2, 512, 64])         |
| 400     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub                 | input_1             | torch.float32 |         | 0.2098667         | 0.3422839        | 0.2606024      | 0.0016773             | torch.Size([2, 512, 1])          |
| 400     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub                 | output              | torch.float32 |         | -0.3422839        | 6.9700212        | 0.0000000      | 0.7079899             | torch.Size([2, 512, 64])         |
| 401     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul                 | input_0             | torch.float32 |         | -0.3422839        | 6.9700212        | 0.0000000      | 0.7079899             | torch.Size([2, 512, 64])         |
| 401     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul                 | input_1             | torch.float32 |         | -0.3422839        | 6.9700212        | 0.0000000      | 0.7079899             | torch.Size([2, 512, 64])         |
| 401     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul                 | output              | torch.float32 |         | 0.0000000         | 48.5811958       | 0.7079791      | 21.7357063            | torch.Size([2, 512, 64])         |
| 402     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean       | input_0             | torch.float32 |         | 0.0000000         | 48.5811958       | 0.7079791      | 21.7357063            | torch.Size([2, 512, 64])         |
| 402     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean       | output              | torch.float32 |         | 0.3556053         | 0.8262171        | 0.7079792      | 0.0119517             | torch.Size([2, 512, 1])          |
| 403     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt               | input               | torch.float32 |         | 0.3556053         | 0.8262171        | 0.7079792      | 0.0119517             | torch.Size([2, 512, 1])          |
| 403     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt               | output              | torch.float32 |         | 1.1001459         | 1.6769100        | 1.2009852      | 0.0115013             | torch.Size([2, 512, 1])          |
| 404     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul             | input_0             | torch.float32 |         | -0.3422839        | 6.9700212        | 0.0000000      | 0.7079899             | torch.Size([2, 512, 64])         |
| 404     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul             | input_1             | torch.float32 |         | 1.1001459         | 1.6769100        | 1.2009852      | 0.0115013             | torch.Size([2, 512, 1])          |
| 404     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul             | output              | torch.float32 |         | -0.5375260        | 7.6972232        | 0.0000000      | 1.0000007             | torch.Size([2, 512, 64])         |
| 405     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant        | input               | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 405     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant        | output              | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 406     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul          | input_0             | torch.float32 |         | -0.5375260        | 7.6972232        | 0.0000000      | 1.0000007             | torch.Size([2, 512, 64])         |
| 406     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul          | input_1             | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 406     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul          | output              | torch.float32 |         | -0.6870567        | 5.6167893        | -0.0365614     | 0.6532915             | torch.Size([2, 512, 64])         |
| 407     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant          | input               | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 407     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant          | output              | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 408     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add            | input_0             | torch.float32 |         | -0.6870567        | 5.6167893        | -0.0365614     | 0.6532915             | torch.Size([2, 512, 64])         |
| 408     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add            | input_1             | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 408     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add            | output              | torch.float32 |         | -0.6728376        | 5.3782487        | 0.0534439      | 0.5705429             | torch.Size([2, 512, 64])         |
| 409     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | input_0             | torch.float32 |         | -0.8540859        | 7.9945936        | 0.0735555      | 0.8713833             | torch.Size([2, 512, 128])        |
| 409     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | input_1             | torch.float32 |         | -1.2276719        | 4.6139283        | -0.0171071     | 1.2521633             | torch.Size([2, 512, 32])         |
| 409     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | input_2             | torch.float32 |         | -0.8153484        | 4.8323817        | 0.0486247      | 0.9006547             | torch.Size([2, 512, 32])         |
| 409     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | input_3             | torch.float32 |         | -0.6728376        | 5.3782487        | 0.0534439      | 0.5705429             | torch.Size([2, 512, 64])         |
| 409     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat                           | output              | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0540784      | 0.8482460             | torch.Size([2, 512, 256])        |
| 410     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 11])         |
| 410     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 3])          |
| 411     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(1)                   | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 3])          |
| 411     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(1)                   | weight              | torch.float32 |         | -0.9216561        | 0.9167990        | -0.0046354     | 0.1373587             | torch.Size([128, 3])             |
| 411     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(1)                   | bias                | torch.float32 |         | -1.0762298        | 1.0183468        | -0.0273298     | 0.3650480             | torch.Size([128])                |
| 411     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(1)                   | output              | torch.float32 |         | -1.0762298        | 1.0183468        | -0.0273298     | 0.3622016             | torch.Size([2, 256, 128])        |
| 412     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(1)                   | input               | torch.float32 |         | 0.0000000         | 1.0183468        | 0.2505171      | 0.1060748             | torch.Size([2, 256, 128])        |
| 412     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(1)                   | output              | torch.float32 |         | 0.0000000         | 1.0183468        | 0.2505171      | 0.1060748             | torch.Size([2, 256, 128])        |
| 413     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000000         | 1.0183468        | 0.2505171      | 0.1060748             | torch.Size([2, 256, 128])        |
| 413     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(1)   | output              | torch.float32 |         | 0.2505170         | 0.2505170        | 0.2505170      | 0.0000000             | torch.Size([2, 256, 1])          |
| 414     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(1)               | input_0             | torch.float32 |         | 0.0000000         | 1.0183468        | 0.2505171      | 0.1060748             | torch.Size([2, 256, 128])        |
| 414     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(1)               | input_1             | torch.float32 |         | 0.2505170         | 0.2505170        | 0.2505170      | 0.0000000             | torch.Size([2, 256, 1])          |
| 414     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(1)               | output              | torch.float32 |         | -0.2505170        | 0.7678298        | 0.0000000      | 0.1060748             | torch.Size([2, 256, 128])        |
| 415     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(1)               | input_0             | torch.float32 |         | -0.2505170        | 0.7678298        | 0.0000000      | 0.1060748             | torch.Size([2, 256, 128])        |
| 415     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(1)               | input_1             | torch.float32 |         | -0.2505170        | 0.7678298        | 0.0000000      | 0.1060748             | torch.Size([2, 256, 128])        |
| 415     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(1)               | output              | torch.float32 |         | 0.0000020         | 0.5895625        | 0.1060732      | 0.0147966             | torch.Size([2, 256, 128])        |
| 416     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(1)     | input_0             | torch.float32 |         | 0.0000020         | 0.5895625        | 0.1060732      | 0.0147966             | torch.Size([2, 256, 128])        |
| 416     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(1)     | output              | torch.float32 |         | 0.1060732         | 0.1060732        | 0.1060732      | 0.0000000             | torch.Size([2, 256, 1])          |
| 417     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(1)             | input               | torch.float32 |         | 0.1060732         | 0.1060732        | 0.1060732      | 0.0000000             | torch.Size([2, 256, 1])          |
| 417     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(1)             | output              | torch.float32 |         | 3.0702708         | 3.0702708        | 3.0702708      | 0.0000000             | torch.Size([2, 256, 1])          |
| 418     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(1)           | input_0             | torch.float32 |         | -0.2505170        | 0.7678298        | 0.0000000      | 0.1060748             | torch.Size([2, 256, 128])        |
| 418     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(1)           | input_1             | torch.float32 |         | 3.0702708         | 3.0702708        | 3.0702708      | 0.0000000             | torch.Size([2, 256, 1])          |
| 418     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(1)           | output              | torch.float32 |         | -0.7691551        | 2.3574452        | -0.0000001     | 0.9999209             | torch.Size([2, 256, 128])        |
| 419     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(1)      | input               | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 419     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(1)      | output              | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 420     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(1)        | input_0             | torch.float32 |         | -0.7691551        | 2.3574452        | -0.0000001     | 0.9999209             | torch.Size([2, 256, 128])        |
| 420     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(1)        | input_1             | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 420     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(1)        | output              | torch.float32 |         | -0.8713731        | 3.1028106        | 0.0298243      | 1.0396560             | torch.Size([2, 256, 128])        |
| 421     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(1)        | input               | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 421     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(1)        | output              | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 422     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(1)          | input_0             | torch.float32 |         | -0.8713731        | 3.1028106        | 0.0298243      | 1.0396560             | torch.Size([2, 256, 128])        |
| 422     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(1)          | input_1             | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 422     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(1)          | output              | torch.float32 |         | -0.8489928        | 3.0990517        | 0.0386447      | 1.0224179             | torch.Size([2, 256, 128])        |
| 423     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(1)                   | input               | torch.float32 |         | -0.8489928        | 3.0990517        | 0.0386447      | 1.0224179             | torch.Size([2, 256, 128])        |
| 423     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(1)                   | weight              | torch.float32 |         | -0.3750711        | 0.3968706        | 0.0019093      | 0.0048458             | torch.Size([128, 128])           |
| 423     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(1)                   | bias                | torch.float32 |         | -0.1863807        | 0.1385574        | -0.0156467     | 0.0047256             | torch.Size([128])                |
| 423     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(1)                   | output              | torch.float32 |         | -6.8171029        | 8.4221582        | -0.0052252     | 9.9828663             | torch.Size([2, 256, 128])        |
| 424     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(1)                   | input               | torch.float32 |         | 0.0000000         | 8.4221582        | 1.3435297      | 3.2989211             | torch.Size([2, 256, 128])        |
| 424     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(1)                   | output              | torch.float32 |         | 0.0000000         | 8.4221582        | 1.3435297      | 3.2989211             | torch.Size([2, 256, 128])        |
| 425     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000000         | 8.4221582        | 1.3435297      | 3.2989211             | torch.Size([2, 256, 128])        |
| 425     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(1)   | output              | torch.float32 |         | 1.3435298         | 1.3435298        | 1.3435298      | 0.0000000             | torch.Size([2, 256, 1])          |
| 426     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(1)               | input_0             | torch.float32 |         | 0.0000000         | 8.4221582        | 1.3435297      | 3.2989211             | torch.Size([2, 256, 128])        |
| 426     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(1)               | input_1             | torch.float32 |         | 1.3435298         | 1.3435298        | 1.3435298      | 0.0000000             | torch.Size([2, 256, 1])          |
| 426     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(1)               | output              | torch.float32 |         | -1.3435298        | 7.0786285        | 0.0000000      | 3.2989213             | torch.Size([2, 256, 128])        |
| 427     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(1)               | input_0             | torch.float32 |         | -1.3435298        | 7.0786285        | 0.0000000      | 3.2989213             | torch.Size([2, 256, 128])        |
| 427     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(1)               | input_1             | torch.float32 |         | -1.3435298        | 7.0786285        | 0.0000000      | 3.2989213             | torch.Size([2, 256, 128])        |
| 427     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(1)               | output              | torch.float32 |         | 0.0004190         | 50.1069832       | 3.2988713      | 34.6244049            | torch.Size([2, 256, 128])        |
| 428     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(1)     | input_0             | torch.float32 |         | 0.0004190         | 50.1069832       | 3.2988713      | 34.6244049            | torch.Size([2, 256, 128])        |
| 428     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(1)     | output              | torch.float32 |         | 3.2988708         | 3.2988708        | 3.2988708      | 0.0000000             | torch.Size([2, 256, 1])          |
| 429     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(1)             | input               | torch.float32 |         | 3.2988708         | 3.2988708        | 3.2988708      | 0.0000000             | torch.Size([2, 256, 1])          |
| 429     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(1)             | output              | torch.float32 |         | 0.5505753         | 0.5505753        | 0.5505753      | 0.0000000             | torch.Size([2, 256, 1])          |
| 430     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(1)           | input_0             | torch.float32 |         | -1.3435298        | 7.0786285        | 0.0000000      | 3.2989213             | torch.Size([2, 256, 128])        |
| 430     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(1)           | input_1             | torch.float32 |         | 0.5505753         | 0.5505753        | 0.5505753      | 0.0000000             | torch.Size([2, 256, 1])          |
| 430     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(1)           | output              | torch.float32 |         | -0.7397143        | 3.8973176        | 0.0000000      | 1.0000123             | torch.Size([2, 256, 128])        |
| 431     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(1)      | input               | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 431     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(1)      | output              | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 432     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(1)        | input_0             | torch.float32 |         | -0.7397143        | 3.8973176        | 0.0000000      | 1.0000123             | torch.Size([2, 256, 128])        |
| 432     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(1)        | input_1             | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 432     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(1)        | output              | torch.float32 |         | -0.8654347        | 5.7393084        | 0.0583878      | 1.1505334             | torch.Size([2, 256, 128])        |
| 433     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(1)        | input               | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 433     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(1)        | output              | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 434     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(1)          | input_0             | torch.float32 |         | -0.8654347        | 5.7393084        | 0.0583878      | 1.1505334             | torch.Size([2, 256, 128])        |
| 434     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(1)          | input_1             | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 434     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(1)          | output              | torch.float32 |         | -0.8651282        | 5.7057357        | 0.0901901      | 1.1173519             | torch.Size([2, 256, 128])        |
| 435     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(1)                   | input               | torch.float32 |         | -0.8651282        | 5.7057357        | 0.0901901      | 1.1173519             | torch.Size([2, 256, 128])        |
| 435     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(1)                   | weight              | torch.float32 |         | -0.7504157        | 0.4182976        | -0.0024651     | 0.0052447             | torch.Size([128, 128])           |
| 435     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(1)                   | bias                | torch.float32 |         | -0.1397866        | 0.1210779        | 0.0064616      | 0.0040949             | torch.Size([128])                |
| 435     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(1)                   | output              | torch.float32 |         | -10.2651653       | 7.1977773        | -0.0046227     | 13.4511890            | torch.Size([2, 256, 128])        |
| 436     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(1)                   | input               | torch.float32 |         | 0.0000000         | 7.1977773        | 1.4766947      | 3.2165136             | torch.Size([2, 256, 128])        |
| 436     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(1)                   | output              | torch.float32 |         | 0.0000000         | 7.1977773        | 1.4766947      | 3.2165136             | torch.Size([2, 256, 128])        |
| 437     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000000         | 7.1977773        | 1.4766947      | 3.2165136             | torch.Size([2, 256, 128])        |
| 437     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(1)   | output              | torch.float32 |         | 1.4766946         | 1.4766946        | 1.4766946      | 0.0000000             | torch.Size([2, 256, 1])          |
| 438     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(1)               | input_0             | torch.float32 |         | 0.0000000         | 7.1977773        | 1.4766947      | 3.2165136             | torch.Size([2, 256, 128])        |
| 438     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(1)               | input_1             | torch.float32 |         | 1.4766946         | 1.4766946        | 1.4766946      | 0.0000000             | torch.Size([2, 256, 1])          |
| 438     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(1)               | output              | torch.float32 |         | -1.4766946        | 5.7210827        | -0.0000002     | 3.2165136             | torch.Size([2, 256, 128])        |
| 439     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(1)               | input_0             | torch.float32 |         | -1.4766946        | 5.7210827        | -0.0000002     | 3.2165136             | torch.Size([2, 256, 128])        |
| 439     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(1)               | input_1             | torch.float32 |         | -1.4766946        | 5.7210827        | -0.0000002     | 3.2165136             | torch.Size([2, 256, 128])        |
| 439     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(1)               | output              | torch.float32 |         | 0.0000226         | 32.7307854       | 3.2164645      | 21.7543278            | torch.Size([2, 256, 128])        |
| 440     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(1)     | input_0             | torch.float32 |         | 0.0000226         | 32.7307854       | 3.2164645      | 21.7543278            | torch.Size([2, 256, 128])        |
| 440     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(1)     | output              | torch.float32 |         | 3.2164650         | 3.2164650        | 3.2164650      | 0.0000000             | torch.Size([2, 256, 1])          |
| 441     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(1)             | input               | torch.float32 |         | 3.2164650         | 3.2164650        | 3.2164650      | 0.0000000             | torch.Size([2, 256, 1])          |
| 441     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(1)             | output              | torch.float32 |         | 0.5575835         | 0.5575835        | 0.5575835      | 0.0000000             | torch.Size([2, 256, 1])          |
| 442     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(1)           | input_0             | torch.float32 |         | -1.4766946        | 5.7210827        | -0.0000002     | 3.2165136             | torch.Size([2, 256, 128])        |
| 442     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(1)           | input_1             | torch.float32 |         | 0.5575835         | 0.5575835        | 0.5575835      | 0.0000000             | torch.Size([2, 256, 1])          |
| 442     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(1)           | output              | torch.float32 |         | -0.8233805        | 3.1899815        | -0.0000002     | 1.0000122             | torch.Size([2, 256, 128])        |
| 443     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(1)      | input               | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 443     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(1)      | output              | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 444     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(1)        | input_0             | torch.float32 |         | -0.8233805        | 3.1899815        | -0.0000002     | 1.0000122             | torch.Size([2, 256, 128])        |
| 444     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(1)        | input_1             | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 444     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(1)        | output              | torch.float32 |         | -0.9262875        | 3.5264697        | 0.0190043      | 1.0193063             | torch.Size([2, 256, 128])        |
| 445     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(1)        | input               | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 445     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(1)        | output              | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 446     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(1)          | input_0             | torch.float32 |         | -0.9262875        | 3.5264697        | 0.0190043      | 1.0193063             | torch.Size([2, 256, 128])        |
| 446     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(1)          | input_1             | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 446     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(1)          | output              | torch.float32 |         | -0.9123855        | 3.4982846        | 0.0406423      | 0.9876468             | torch.Size([2, 256, 128])        |
| 447     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(1)                   | input               | torch.float32 |         | -0.9123855        | 3.4982846        | 0.0406423      | 0.9876468             | torch.Size([2, 256, 128])        |
| 447     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(1)                   | weight              | torch.float32 |         | -0.4264432        | 0.3183554        | 0.0005866      | 0.0053991             | torch.Size([128, 128])           |
| 447     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(1)                   | bias                | torch.float32 |         | -0.1690418        | 0.1536980        | -0.0166056     | 0.0039884             | torch.Size([128])                |
| 447     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(1)                   | output              | torch.float32 |         | -8.9229841        | 5.5221524        | -0.3625587     | 5.1881099             | torch.Size([2, 256, 128])        |
| 448     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(1)                  | input               | torch.float32 |         | 0.0000000         | 5.5221524        | 0.6454428      | 1.1305894             | torch.Size([2, 256, 128])        |
| 448     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(1)                  | output              | torch.float32 |         | 0.0000000         | 5.5221524        | 0.6454428      | 1.1305894             | torch.Size([2, 256, 128])        |
| 449     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(1)  | input_0             | torch.float32 |         | 0.0000000         | 5.5221524        | 0.6454428      | 1.1305894             | torch.Size([2, 256, 128])        |
| 449     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(1)  | output              | torch.float32 |         | 0.6454428         | 0.6454428        | 0.6454428      | 0.0000000             | torch.Size([2, 256, 1])          |
| 450     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(1)              | input_0             | torch.float32 |         | 0.0000000         | 5.5221524        | 0.6454428      | 1.1305894             | torch.Size([2, 256, 128])        |
| 450     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(1)              | input_1             | torch.float32 |         | 0.6454428         | 0.6454428        | 0.6454428      | 0.0000000             | torch.Size([2, 256, 1])          |
| 450     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(1)              | output              | torch.float32 |         | -0.6454428        | 4.8767095        | -0.0000001     | 1.1305894             | torch.Size([2, 256, 128])        |
| 451     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(1)              | input_0             | torch.float32 |         | -0.6454428        | 4.8767095        | -0.0000001     | 1.1305894             | torch.Size([2, 256, 128])        |
| 451     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(1)              | input_1             | torch.float32 |         | -0.6454428        | 4.8767095        | -0.0000001     | 1.1305894             | torch.Size([2, 256, 128])        |
| 451     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(1)              | output              | torch.float32 |         | 0.0105261         | 23.7822952       | 1.1305721      | 7.8225250             | torch.Size([2, 256, 128])        |
| 452     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(1)    | input_0             | torch.float32 |         | 0.0105261         | 23.7822952       | 1.1305721      | 7.8225250             | torch.Size([2, 256, 128])        |
| 452     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(1)    | output              | torch.float32 |         | 1.1305721         | 1.1305721        | 1.1305721      | 0.0000000             | torch.Size([2, 256, 1])          |
| 453     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(1)            | input               | torch.float32 |         | 1.1305721         | 1.1305721        | 1.1305721      | 0.0000000             | torch.Size([2, 256, 1])          |
| 453     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(1)            | output              | torch.float32 |         | 0.9404787         | 0.9404787        | 0.9404787      | 0.0000000             | torch.Size([2, 256, 1])          |
| 454     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(1)          | input_0             | torch.float32 |         | -0.6454428        | 4.8767095        | -0.0000001     | 1.1305894             | torch.Size([2, 256, 128])        |
| 454     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(1)          | input_1             | torch.float32 |         | 0.9404787         | 0.9404787        | 0.9404787      | 0.0000000             | torch.Size([2, 256, 1])          |
| 454     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(1)          | output              | torch.float32 |         | -0.6070252        | 4.5864415        | 0.0000000      | 1.0000066             | torch.Size([2, 256, 128])        |
| 455     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(1)     | input               | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 455     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(1)     | output              | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 456     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(1)       | input_0             | torch.float32 |         | -0.6070252        | 4.5864415        | 0.0000000      | 1.0000066             | torch.Size([2, 256, 128])        |
| 456     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(1)       | input_1             | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 456     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(1)       | output              | torch.float32 |         | -0.8499647        | 4.6326628        | 0.0158386      | 0.9572933             | torch.Size([2, 256, 128])        |
| 457     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(1)       | input               | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 457     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(1)       | output              | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 458     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(1)         | input_0             | torch.float32 |         | -0.8499647        | 4.6326628        | 0.0158386      | 0.9572933             | torch.Size([2, 256, 128])        |
| 458     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(1)         | input_1             | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 458     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(1)         | output              | torch.float32 |         | -0.8540859        | 4.6532683        | 0.0778289      | 0.9581357             | torch.Size([2, 256, 128])        |
| 459     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 11])         |
| 459     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 3])          |
| 460     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(1)                  | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 3])          |
| 460     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(1)                  | weight              | torch.float32 |         | -0.8288664        | 0.6362330        | 0.0683853      | 0.1118651             | torch.Size([32, 3])              |
| 460     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(1)                  | bias                | torch.float32 |         | -0.5554879        | 0.5432062        | 0.0766153      | 0.1068659             | torch.Size([32])                 |
| 460     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(1)                  | output              | torch.float32 |         | -0.5554879        | 0.5432062        | 0.0766153      | 0.1035326             | torch.Size([2, 256, 32])         |
| 461     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(1)                  | input               | torch.float32 |         | 0.0000000         | 0.5432062        | 0.1793103      | 0.0319320             | torch.Size([2, 256, 32])         |
| 461     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(1)                  | output              | torch.float32 |         | 0.0000000         | 0.5432062        | 0.1793103      | 0.0319320             | torch.Size([2, 256, 32])         |
| 462     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(1)  | input_0             | torch.float32 |         | 0.0000000         | 0.5432062        | 0.1793103      | 0.0319320             | torch.Size([2, 256, 32])         |
| 462     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(1)  | output              | torch.float32 |         | 0.1793103         | 0.1793103        | 0.1793103      | 0.0000000             | torch.Size([2, 256, 1])          |
| 463     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(1)              | input_0             | torch.float32 |         | 0.0000000         | 0.5432062        | 0.1793103      | 0.0319320             | torch.Size([2, 256, 32])         |
| 463     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(1)              | input_1             | torch.float32 |         | 0.1793103         | 0.1793103        | 0.1793103      | 0.0000000             | torch.Size([2, 256, 1])          |
| 463     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(1)              | output              | torch.float32 |         | -0.1793103        | 0.3638958        | 0.0000000      | 0.0319320             | torch.Size([2, 256, 32])         |
| 464     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(1)              | input_0             | torch.float32 |         | -0.1793103        | 0.3638958        | 0.0000000      | 0.0319320             | torch.Size([2, 256, 32])         |
| 464     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(1)              | input_1             | torch.float32 |         | -0.1793103        | 0.3638958        | 0.0000000      | 0.0319320             | torch.Size([2, 256, 32])         |
| 464     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(1)              | output              | torch.float32 |         | 0.0004745         | 0.1324202        | 0.0319300      | 0.0011017             | torch.Size([2, 256, 32])         |
| 465     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(1)    | input_0             | torch.float32 |         | 0.0004745         | 0.1324202        | 0.0319300      | 0.0011017             | torch.Size([2, 256, 32])         |
| 465     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(1)    | output              | torch.float32 |         | 0.0319300         | 0.0319300        | 0.0319300      | 0.0000000             | torch.Size([2, 256, 1])          |
| 466     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(1)            | input               | torch.float32 |         | 0.0319300         | 0.0319300        | 0.0319300      | 0.0000000             | torch.Size([2, 256, 1])          |
| 466     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(1)            | output              | torch.float32 |         | 5.5954180         | 5.5954180        | 5.5954180      | 0.0000000             | torch.Size([2, 256, 1])          |
| 467     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(1)          | input_0             | torch.float32 |         | -0.1793103        | 0.3638958        | 0.0000000      | 0.0319320             | torch.Size([2, 256, 32])         |
| 467     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(1)          | input_1             | torch.float32 |         | 5.5954180         | 5.5954180        | 5.5954180      | 0.0000000             | torch.Size([2, 256, 1])          |
| 467     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(1)          | output              | torch.float32 |         | -1.0033162        | 2.0361493        | 0.0000001      | 0.9997480             | torch.Size([2, 256, 32])         |
| 468     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(1)     | input               | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 468     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(1)     | output              | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 469     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(1)       | input_0             | torch.float32 |         | -1.0033162        | 2.0361493        | 0.0000001      | 0.9997480             | torch.Size([2, 256, 32])         |
| 469     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(1)       | input_1             | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 469     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(1)       | output              | torch.float32 |         | -1.1154957        | 2.1535890        | 0.0222734      | 1.0213500             | torch.Size([2, 256, 32])         |
| 470     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(1)       | input               | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 470     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(1)       | output              | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 471     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(1)         | input_0             | torch.float32 |         | -1.1154957        | 2.1535890        | 0.0222734      | 1.0213500             | torch.Size([2, 256, 32])         |
| 471     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(1)         | input_1             | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 471     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(1)         | output              | torch.float32 |         | -1.1061103        | 2.1481097        | 0.0257997      | 0.9518210             | torch.Size([2, 256, 32])         |
| 472     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(1)                  | input               | torch.float32 |         | -1.1061103        | 2.1481097        | 0.0257997      | 0.9518210             | torch.Size([2, 256, 32])         |
| 472     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(1)                  | weight              | torch.float32 |         | -0.5793310        | 0.5422795        | -0.0032135     | 0.0176575             | torch.Size([32, 32])             |
| 472     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(1)                  | bias                | torch.float32 |         | -0.1716317        | 0.2230143        | 0.0007250      | 0.0126328             | torch.Size([32])                 |
| 472     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(1)                  | output              | torch.float32 |         | -3.1219690        | 2.0599034        | -0.2416928     | 1.8172480             | torch.Size([2, 256, 32])         |
| 473     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(1)                  | input               | torch.float32 |         | 0.0000000         | 2.0599034        | 0.4206609      | 0.3537860             | torch.Size([2, 256, 32])         |
| 473     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(1)                  | output              | torch.float32 |         | 0.0000000         | 2.0599034        | 0.4206609      | 0.3537860             | torch.Size([2, 256, 32])         |
| 474     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(1)  | input_0             | torch.float32 |         | 0.0000000         | 2.0599034        | 0.4206609      | 0.3537860             | torch.Size([2, 256, 32])         |
| 474     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(1)  | output              | torch.float32 |         | 0.4206609         | 0.4206609        | 0.4206609      | 0.0000000             | torch.Size([2, 256, 1])          |
| 475     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(1)              | input_0             | torch.float32 |         | 0.0000000         | 2.0599034        | 0.4206609      | 0.3537860             | torch.Size([2, 256, 32])         |
| 475     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(1)              | input_1             | torch.float32 |         | 0.4206609         | 0.4206609        | 0.4206609      | 0.0000000             | torch.Size([2, 256, 1])          |
| 475     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(1)              | output              | torch.float32 |         | -0.4206609        | 1.6392424        | -0.0000000     | 0.3537860             | torch.Size([2, 256, 32])         |
| 476     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(1)              | input_0             | torch.float32 |         | -0.4206609        | 1.6392424        | -0.0000000     | 0.3537860             | torch.Size([2, 256, 32])         |
| 476     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(1)              | input_1             | torch.float32 |         | -0.4206609        | 1.6392424        | -0.0000000     | 0.3537860             | torch.Size([2, 256, 32])         |
| 476     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(1)              | output              | torch.float32 |         | 0.0034400         | 2.6871157        | 0.3537644      | 0.3393317             | torch.Size([2, 256, 32])         |
| 477     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(1)    | input_0             | torch.float32 |         | 0.0034400         | 2.6871157        | 0.3537644      | 0.3393317             | torch.Size([2, 256, 32])         |
| 477     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(1)    | output              | torch.float32 |         | 0.3537644         | 0.3537644        | 0.3537644      | 0.0000000             | torch.Size([2, 256, 1])          |
| 478     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(1)            | input               | torch.float32 |         | 0.3537644         | 0.3537644        | 0.3537644      | 0.0000000             | torch.Size([2, 256, 1])          |
| 478     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(1)            | output              | torch.float32 |         | 1.6812674         | 1.6812674        | 1.6812674      | 0.0000000             | torch.Size([2, 256, 1])          |
| 479     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(1)          | input_0             | torch.float32 |         | -0.4206609        | 1.6392424        | -0.0000000     | 0.3537860             | torch.Size([2, 256, 32])         |
| 479     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(1)          | input_1             | torch.float32 |         | 1.6812674         | 1.6812674        | 1.6812674      | 0.0000000             | torch.Size([2, 256, 1])          |
| 479     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(1)          | output              | torch.float32 |         | -0.7072434        | 2.7560048        | -0.0000000     | 1.0000327             | torch.Size([2, 256, 32])         |
| 480     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(1)     | input               | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 480     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(1)     | output              | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 481     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(1)       | input_0             | torch.float32 |         | -0.7072434        | 2.7560048        | -0.0000000     | 1.0000327             | torch.Size([2, 256, 32])         |
| 481     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(1)       | input_1             | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 481     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(1)       | output              | torch.float32 |         | -0.7725728        | 2.8595815        | 0.0109236      | 1.0183249             | torch.Size([2, 256, 32])         |
| 482     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(1)       | input               | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 482     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(1)       | output              | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 483     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(1)         | input_0             | torch.float32 |         | -0.7725728        | 2.8595815        | 0.0109236      | 1.0183249             | torch.Size([2, 256, 32])         |
| 483     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(1)         | input_1             | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 483     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(1)         | output              | torch.float32 |         | -0.7850935        | 2.8422685        | 0.0206857      | 0.9854196             | torch.Size([2, 256, 32])         |
| 484     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(1)                  | input               | torch.float32 |         | -0.7850935        | 2.8422685        | 0.0206857      | 0.9854196             | torch.Size([2, 256, 32])         |
| 484     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(1)                  | weight              | torch.float32 |         | -0.5712157        | 0.5219681        | -0.0062917     | 0.0166056             | torch.Size([32, 32])             |
| 484     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(1)                  | bias                | torch.float32 |         | -0.1649730        | 0.2318604        | 0.0253026      | 0.0136139             | torch.Size([32])                 |
| 484     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(1)                  | output              | torch.float32 |         | -4.1319027        | 2.0640335        | -0.0856677     | 1.9114702             | torch.Size([2, 256, 32])         |
| 485     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(1)                  | input               | torch.float32 |         | 0.0000000         | 2.0640335        | 0.4793801      | 0.3560334             | torch.Size([2, 256, 32])         |
| 485     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(1)                  | output              | torch.float32 |         | 0.0000000         | 2.0640335        | 0.4793801      | 0.3560334             | torch.Size([2, 256, 32])         |
| 486     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(1)  | input_0             | torch.float32 |         | 0.0000000         | 2.0640335        | 0.4793801      | 0.3560334             | torch.Size([2, 256, 32])         |
| 486     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(1)  | output              | torch.float32 |         | 0.4793800         | 0.4793800        | 0.4793800      | 0.0000000             | torch.Size([2, 256, 1])          |
| 487     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(1)              | input_0             | torch.float32 |         | 0.0000000         | 2.0640335        | 0.4793801      | 0.3560334             | torch.Size([2, 256, 32])         |
| 487     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(1)              | input_1             | torch.float32 |         | 0.4793800         | 0.4793800        | 0.4793800      | 0.0000000             | torch.Size([2, 256, 1])          |
| 487     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(1)              | output              | torch.float32 |         | -0.4793800        | 1.5846535        | 0.0000000      | 0.3560334             | torch.Size([2, 256, 32])         |
| 488     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(1)              | input_0             | torch.float32 |         | -0.4793800        | 1.5846535        | 0.0000000      | 0.3560334             | torch.Size([2, 256, 32])         |
| 488     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(1)              | input_1             | torch.float32 |         | -0.4793800        | 1.5846535        | 0.0000000      | 0.3560334             | torch.Size([2, 256, 32])         |
| 488     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(1)              | output              | torch.float32 |         | 0.0013569         | 2.5111268        | 0.3560116      | 0.3257346             | torch.Size([2, 256, 32])         |
| 489     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(1)    | input_0             | torch.float32 |         | 0.0013569         | 2.5111268        | 0.3560116      | 0.3257346             | torch.Size([2, 256, 32])         |
| 489     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(1)    | output              | torch.float32 |         | 0.3560116         | 0.3560116        | 0.3560116      | 0.0000000             | torch.Size([2, 256, 1])          |
| 490     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(1)            | input               | torch.float32 |         | 0.3560116         | 0.3560116        | 0.3560116      | 0.0000000             | torch.Size([2, 256, 1])          |
| 490     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(1)            | output              | torch.float32 |         | 1.6759529         | 1.6759529        | 1.6759529      | 0.0000000             | torch.Size([2, 256, 1])          |
| 491     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(1)          | input_0             | torch.float32 |         | -0.4793800        | 1.5846535        | 0.0000000      | 0.3560334             | torch.Size([2, 256, 32])         |
| 491     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(1)          | input_1             | torch.float32 |         | 1.6759529         | 1.6759529        | 1.6759529      | 0.0000000             | torch.Size([2, 256, 1])          |
| 491     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(1)          | output              | torch.float32 |         | -0.8034183        | 2.6558046        | 0.0000000      | 1.0000330             | torch.Size([2, 256, 32])         |
| 492     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(1)     | input               | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 492     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(1)     | output              | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 493     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(1)       | input_0             | torch.float32 |         | -0.8034183        | 2.6558046        | 0.0000000      | 1.0000330             | torch.Size([2, 256, 32])         |
| 493     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(1)       | input_1             | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 493     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(1)       | output              | torch.float32 |         | -0.9091064        | 2.8030589        | 0.0115732      | 1.0257865             | torch.Size([2, 256, 32])         |
| 494     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(1)       | input               | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 494     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(1)       | output              | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 495     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(1)         | input_0             | torch.float32 |         | -0.9091064        | 2.8030589        | 0.0115732      | 1.0257865             | torch.Size([2, 256, 32])         |
| 495     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(1)         | input_1             | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 495     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(1)         | output              | torch.float32 |         | -0.8780257        | 2.8279874        | 0.0157693      | 0.9976039             | torch.Size([2, 256, 32])         |
| 496     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(1)                  | input               | torch.float32 |         | -0.8780257        | 2.8279874        | 0.0157693      | 0.9976039             | torch.Size([2, 256, 32])         |
| 496     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(1)                  | weight              | torch.float32 |         | -0.3204980        | 0.3365203        | -0.0020388     | 0.0145364             | torch.Size([32, 32])             |
| 496     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(1)                  | bias                | torch.float32 |         | -0.1559148        | 0.2119379        | 0.0091616      | 0.0105488             | torch.Size([32])                 |
| 496     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(1)                  | output              | torch.float32 |         | -1.5596967        | 2.2504394        | -0.0438393     | 0.8533034             | torch.Size([2, 256, 32])         |
| 497     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(1)                 | input               | torch.float32 |         | 0.0000000         | 2.2504394        | 0.3568923      | 0.3123139             | torch.Size([2, 256, 32])         |
| 497     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(1)                 | output              | torch.float32 |         | 0.0000000         | 2.2504394        | 0.3568923      | 0.3123139             | torch.Size([2, 256, 32])         |
| 498     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(1) | input_0             | torch.float32 |         | 0.0000000         | 2.2504394        | 0.3568923      | 0.3123139             | torch.Size([2, 256, 32])         |
| 498     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(1) | output              | torch.float32 |         | 0.3568923         | 0.3568923        | 0.3568923      | 0.0000000             | torch.Size([2, 256, 1])          |
| 499     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(1)             | input_0             | torch.float32 |         | 0.0000000         | 2.2504394        | 0.3568923      | 0.3123139             | torch.Size([2, 256, 32])         |
| 499     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(1)             | input_1             | torch.float32 |         | 0.3568923         | 0.3568923        | 0.3568923      | 0.0000000             | torch.Size([2, 256, 1])          |
| 499     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(1)             | output              | torch.float32 |         | -0.3568923        | 1.8935471        | -0.0000000     | 0.3123139             | torch.Size([2, 256, 32])         |
| 500     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(1)             | input_0             | torch.float32 |         | -0.3568923        | 1.8935471        | -0.0000000     | 0.3123139             | torch.Size([2, 256, 32])         |
| 500     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(1)             | input_1             | torch.float32 |         | -0.3568923        | 1.8935471        | -0.0000000     | 0.3123139             | torch.Size([2, 256, 32])         |
| 500     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(1)             | output              | torch.float32 |         | 0.0000572         | 3.5855205        | 0.3122948      | 0.4600431             | torch.Size([2, 256, 32])         |
| 501     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000572         | 3.5855205        | 0.3122948      | 0.4600431             | torch.Size([2, 256, 32])         |
| 501     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(1)   | output              | torch.float32 |         | 0.3122948         | 0.3122948        | 0.3122948      | 0.0000000             | torch.Size([2, 256, 1])          |
| 502     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(1)           | input               | torch.float32 |         | 0.3122948         | 0.3122948        | 0.3122948      | 0.0000000             | torch.Size([2, 256, 1])          |
| 502     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(1)           | output              | torch.float32 |         | 1.7894133         | 1.7894133        | 1.7894133      | 0.0000000             | torch.Size([2, 256, 1])          |
| 503     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(1)         | input_0             | torch.float32 |         | -0.3568923        | 1.8935471        | -0.0000000     | 0.3123139             | torch.Size([2, 256, 32])         |
| 503     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(1)         | input_1             | torch.float32 |         | 1.7894133         | 1.7894133        | 1.7894133      | 0.0000000             | torch.Size([2, 256, 1])          |
| 503     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(1)         | output              | torch.float32 |         | -0.6386278        | 3.3883383        | -0.0000000     | 1.0000291             | torch.Size([2, 256, 32])         |
| 504     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(1)    | input               | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 504     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(1)    | output              | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 505     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(1)      | input_0             | torch.float32 |         | -0.6386278        | 3.3883383        | -0.0000000     | 1.0000291             | torch.Size([2, 256, 32])         |
| 505     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(1)      | input_1             | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 505     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(1)      | output              | torch.float32 |         | -1.0607007        | 4.1038327        | -0.0524514     | 1.4243474             | torch.Size([2, 256, 32])         |
| 506     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(1)      | input               | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 506     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(1)      | output              | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 507     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(1)        | input_0             | torch.float32 |         | -1.0607007        | 4.1038327        | -0.0524514     | 1.4243474             | torch.Size([2, 256, 32])         |
| 507     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(1)        | input_1             | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 507     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(1)        | output              | torch.float32 |         | -1.0113386        | 4.0937653        | -0.0078829     | 1.3019068             | torch.Size([2, 256, 32])         |
| 508     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 11])         |
| 508     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 2])          |
| 509     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(1)                   | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 2])          |
| 509     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(1)                   | weight              | torch.float32 |         | -0.7023237        | 0.7394427        | 0.0490668      | 0.1972211             | torch.Size([32, 2])              |
| 509     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(1)                   | bias                | torch.float32 |         | -0.7971504        | 0.6681666        | -0.1171320     | 0.1641774             | torch.Size([32])                 |
| 509     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(1)                   | output              | torch.float32 |         | -0.7971504        | 0.6681666        | -0.1171320     | 0.1590565             | torch.Size([2, 256, 32])         |
| 510     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(1)                   | input               | torch.float32 |         | 0.0000000         | 0.6681666        | 0.1227766      | 0.0449755             | torch.Size([2, 256, 32])         |
| 510     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(1)                   | output              | torch.float32 |         | 0.0000000         | 0.6681666        | 0.1227766      | 0.0449755             | torch.Size([2, 256, 32])         |
| 511     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000000         | 0.6681666        | 0.1227766      | 0.0449755             | torch.Size([2, 256, 32])         |
| 511     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(1)   | output              | torch.float32 |         | 0.1227766         | 0.1227766        | 0.1227766      | 0.0000000             | torch.Size([2, 256, 1])          |
| 512     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(1)               | input_0             | torch.float32 |         | 0.0000000         | 0.6681666        | 0.1227766      | 0.0449755             | torch.Size([2, 256, 32])         |
| 512     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(1)               | input_1             | torch.float32 |         | 0.1227766         | 0.1227766        | 0.1227766      | 0.0000000             | torch.Size([2, 256, 1])          |
| 512     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(1)               | output              | torch.float32 |         | -0.1227766        | 0.5453900        | 0.0000000      | 0.0449755             | torch.Size([2, 256, 32])         |
| 513     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(1)               | input_0             | torch.float32 |         | -0.1227766        | 0.5453900        | 0.0000000      | 0.0449755             | torch.Size([2, 256, 32])         |
| 513     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(1)               | input_1             | torch.float32 |         | -0.1227766        | 0.5453900        | 0.0000000      | 0.0449755             | torch.Size([2, 256, 32])         |
| 513     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(1)               | output              | torch.float32 |         | 0.0000342         | 0.2974503        | 0.0449728      | 0.0064568             | torch.Size([2, 256, 32])         |
| 514     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(1)     | input_0             | torch.float32 |         | 0.0000342         | 0.2974503        | 0.0449728      | 0.0064568             | torch.Size([2, 256, 32])         |
| 514     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(1)     | output              | torch.float32 |         | 0.0449728         | 0.0449728        | 0.0449728      | 0.0000000             | torch.Size([2, 256, 1])          |
| 515     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(1)             | input               | torch.float32 |         | 0.0449728         | 0.0449728        | 0.0449728      | 0.0000000             | torch.Size([2, 256, 1])          |
| 515     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(1)             | output              | torch.float32 |         | 4.7149482         | 4.7149482        | 4.7149482      | 0.0000000             | torch.Size([2, 256, 1])          |
| 516     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(1)           | input_0             | torch.float32 |         | -0.1227766        | 0.5453900        | 0.0000000      | 0.0449755             | torch.Size([2, 256, 32])         |
| 516     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(1)           | input_1             | torch.float32 |         | 4.7149482         | 4.7149482        | 4.7149482      | 0.0000000             | torch.Size([2, 256, 1])          |
| 516     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(1)           | output              | torch.float32 |         | -0.5788852        | 2.5714855        | 0.0000001      | 0.9998387             | torch.Size([2, 256, 32])         |
| 517     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(1)      | input               | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 517     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(1)      | output              | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 518     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(1)        | input_0             | torch.float32 |         | -0.5788852        | 2.5714855        | 0.0000001      | 0.9998387             | torch.Size([2, 256, 32])         |
| 518     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(1)        | input_1             | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 518     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(1)        | output              | torch.float32 |         | -0.6800938        | 2.7844987        | 0.0035399      | 1.0161457             | torch.Size([2, 256, 32])         |
| 519     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(1)        | input               | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 519     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(1)        | output              | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 520     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(1)          | input_0             | torch.float32 |         | -0.6800938        | 2.7844987        | 0.0035399      | 1.0161457             | torch.Size([2, 256, 32])         |
| 520     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(1)          | input_1             | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 520     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(1)          | output              | torch.float32 |         | -0.6470339        | 2.7040372        | 0.0320439      | 0.9298973             | torch.Size([2, 256, 32])         |
| 521     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(1)                   | input               | torch.float32 |         | -0.6470339        | 2.7040372        | 0.0320439      | 0.9298973             | torch.Size([2, 256, 32])         |
| 521     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(1)                   | weight              | torch.float32 |         | -1.0547366        | 0.5812716        | 0.0070099      | 0.0187704             | torch.Size([32, 32])             |
| 521     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(1)                   | bias                | torch.float32 |         | -0.2183180        | 0.1396109        | -0.0140744     | 0.0103446             | torch.Size([32])                 |
| 521     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(1)                   | output              | torch.float32 |         | -3.4549761        | 1.3085088        | -0.5772952     | 1.5233567             | torch.Size([2, 256, 32])         |
| 522     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(1)                   | input               | torch.float32 |         | 0.0000000         | 1.3085088        | 0.2304298      | 0.1220357             | torch.Size([2, 256, 32])         |
| 522     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(1)                   | output              | torch.float32 |         | 0.0000000         | 1.3085088        | 0.2304298      | 0.1220357             | torch.Size([2, 256, 32])         |
| 523     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000000         | 1.3085088        | 0.2304298      | 0.1220357             | torch.Size([2, 256, 32])         |
| 523     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(1)   | output              | torch.float32 |         | 0.2304298         | 0.2304298        | 0.2304298      | 0.0000000             | torch.Size([2, 256, 1])          |
| 524     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(1)               | input_0             | torch.float32 |         | 0.0000000         | 1.3085088        | 0.2304298      | 0.1220357             | torch.Size([2, 256, 32])         |
| 524     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(1)               | input_1             | torch.float32 |         | 0.2304298         | 0.2304298        | 0.2304298      | 0.0000000             | torch.Size([2, 256, 1])          |
| 524     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(1)               | output              | torch.float32 |         | -0.2304298        | 1.0780790        | 0.0000000      | 0.1220357             | torch.Size([2, 256, 32])         |
| 525     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(1)               | input_0             | torch.float32 |         | -0.2304298        | 1.0780790        | 0.0000000      | 0.1220357             | torch.Size([2, 256, 32])         |
| 525     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(1)               | input_1             | torch.float32 |         | -0.2304298        | 1.0780790        | 0.0000000      | 0.1220357             | torch.Size([2, 256, 32])         |
| 525     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(1)               | output              | torch.float32 |         | 0.0000836         | 1.1622543        | 0.1220283      | 0.0480896             | torch.Size([2, 256, 32])         |
| 526     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(1)     | input_0             | torch.float32 |         | 0.0000836         | 1.1622543        | 0.1220283      | 0.0480896             | torch.Size([2, 256, 32])         |
| 526     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(1)     | output              | torch.float32 |         | 0.1220283         | 0.1220283        | 0.1220283      | 0.0000000             | torch.Size([2, 256, 1])          |
| 527     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(1)             | input               | torch.float32 |         | 0.1220283         | 0.1220283        | 0.1220283      | 0.0000000             | torch.Size([2, 256, 1])          |
| 527     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(1)             | output              | torch.float32 |         | 2.8625426         | 2.8625426        | 2.8625426      | 0.0000000             | torch.Size([2, 256, 1])          |
| 528     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(1)           | input_0             | torch.float32 |         | -0.2304298        | 1.0780790        | 0.0000000      | 0.1220357             | torch.Size([2, 256, 32])         |
| 528     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(1)           | input_1             | torch.float32 |         | 2.8625426         | 2.8625426        | 2.8625426      | 0.0000000             | torch.Size([2, 256, 1])          |
| 528     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(1)           | output              | torch.float32 |         | -0.6596150        | 3.0860469        | 0.0000000      | 0.9999791             | torch.Size([2, 256, 32])         |
| 529     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(1)      | input               | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 529     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(1)      | output              | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 530     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(1)        | input_0             | torch.float32 |         | -0.6596150        | 3.0860469        | 0.0000000      | 0.9999791             | torch.Size([2, 256, 32])         |
| 530     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(1)        | input_1             | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 530     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(1)        | output              | torch.float32 |         | -0.7386482        | 3.1492574        | -0.0011154     | 0.9854336             | torch.Size([2, 256, 32])         |
| 531     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(1)        | input               | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 531     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(1)        | output              | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 532     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(1)          | input_0             | torch.float32 |         | -0.7386482        | 3.1492574        | -0.0011154     | 0.9854336             | torch.Size([2, 256, 32])         |
| 532     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(1)          | input_1             | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 532     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(1)          | output              | torch.float32 |         | -0.7239540        | 3.1084411        | 0.0231288      | 0.9194999             | torch.Size([2, 256, 32])         |
| 533     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(1)                   | input               | torch.float32 |         | -0.7239540        | 3.1084411        | 0.0231288      | 0.9194999             | torch.Size([2, 256, 32])         |
| 533     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(1)                   | weight              | torch.float32 |         | -0.4480607        | 0.3678726        | 0.0004879      | 0.0160908             | torch.Size([32, 32])             |
| 533     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(1)                   | bias                | torch.float32 |         | -0.1861591        | 0.1739754        | 0.0155446      | 0.0137690             | torch.Size([32])                 |
| 533     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(1)                   | output              | torch.float32 |         | -3.6041934        | 1.3044604        | -0.3644730     | 1.7754184             | torch.Size([2, 256, 32])         |
| 534     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(1)                   | input               | torch.float32 |         | 0.0000000         | 1.3044604        | 0.3374410      | 0.1834022             | torch.Size([2, 256, 32])         |
| 534     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(1)                   | output              | torch.float32 |         | 0.0000000         | 1.3044604        | 0.3374410      | 0.1834022             | torch.Size([2, 256, 32])         |
| 535     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000000         | 1.3044604        | 0.3374410      | 0.1834022             | torch.Size([2, 256, 32])         |
| 535     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(1)   | output              | torch.float32 |         | 0.3374409         | 0.3374409        | 0.3374409      | 0.0000000             | torch.Size([2, 256, 1])          |
| 536     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(1)               | input_0             | torch.float32 |         | 0.0000000         | 1.3044604        | 0.3374410      | 0.1834022             | torch.Size([2, 256, 32])         |
| 536     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(1)               | input_1             | torch.float32 |         | 0.3374409         | 0.3374409        | 0.3374409      | 0.0000000             | torch.Size([2, 256, 1])          |
| 536     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(1)               | output              | torch.float32 |         | -0.3374409        | 0.9670194        | 0.0000000      | 0.1834022             | torch.Size([2, 256, 32])         |
| 537     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(1)               | input_0             | torch.float32 |         | -0.3374409        | 0.9670194        | 0.0000000      | 0.1834022             | torch.Size([2, 256, 32])         |
| 537     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(1)               | input_1             | torch.float32 |         | -0.3374409        | 0.9670194        | 0.0000000      | 0.1834022             | torch.Size([2, 256, 32])         |
| 537     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(1)               | output              | torch.float32 |         | 0.0005289         | 0.9351266        | 0.1833910      | 0.0551193             | torch.Size([2, 256, 32])         |
| 538     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(1)     | input_0             | torch.float32 |         | 0.0005289         | 0.9351266        | 0.1833910      | 0.0551193             | torch.Size([2, 256, 32])         |
| 538     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(1)     | output              | torch.float32 |         | 0.1833910         | 0.1833910        | 0.1833910      | 0.0000000             | torch.Size([2, 256, 1])          |
| 539     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(1)             | input               | torch.float32 |         | 0.1833910         | 0.1833910        | 0.1833910      | 0.0000000             | torch.Size([2, 256, 1])          |
| 539     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(1)             | output              | torch.float32 |         | 2.3350656         | 2.3350656        | 2.3350656      | 0.0000000             | torch.Size([2, 256, 1])          |
| 540     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(1)           | input_0             | torch.float32 |         | -0.3374409        | 0.9670194        | 0.0000000      | 0.1834022             | torch.Size([2, 256, 32])         |
| 540     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(1)           | input_1             | torch.float32 |         | 2.3350656         | 2.3350656        | 2.3350656      | 0.0000000             | torch.Size([2, 256, 1])          |
| 540     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(1)           | output              | torch.float32 |         | -0.7879467        | 2.2580538        | 0.0000000      | 1.0000064             | torch.Size([2, 256, 32])         |
| 541     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(1)      | input               | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 541     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(1)      | output              | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 542     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(1)        | input_0             | torch.float32 |         | -0.7879467        | 2.2580538        | 0.0000000      | 1.0000064             | torch.Size([2, 256, 32])         |
| 542     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(1)        | input_1             | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 542     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(1)        | output              | torch.float32 |         | -0.8738688        | 2.2798314        | -0.0062522     | 0.9921706             | torch.Size([2, 256, 32])         |
| 543     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(1)        | input               | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 543     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(1)        | output              | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 544     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(1)          | input_0             | torch.float32 |         | -0.8738688        | 2.2798314        | -0.0062522     | 0.9921706             | torch.Size([2, 256, 32])         |
| 544     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(1)          | input_1             | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 544     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(1)          | output              | torch.float32 |         | -0.8723854        | 2.2831128        | 0.0009175      | 0.9614892             | torch.Size([2, 256, 32])         |
| 545     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(1)                   | input               | torch.float32 |         | -0.8723854        | 2.2831128        | 0.0009175      | 0.9614892             | torch.Size([2, 256, 32])         |
| 545     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(1)                   | weight              | torch.float32 |         | -0.5597425        | 0.7001730        | 0.0015679      | 0.0160348             | torch.Size([32, 32])             |
| 545     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(1)                   | bias                | torch.float32 |         | -0.1810580        | 0.1736723        | -0.0279047     | 0.0091159             | torch.Size([32])                 |
| 545     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(1)                   | output              | torch.float32 |         | -3.5563838        | 2.8472507        | -0.2507882     | 1.3491338             | torch.Size([2, 256, 32])         |
| 546     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(1)                  | input               | torch.float32 |         | 0.0000000         | 2.8472507        | 0.2831749      | 0.4061674             | torch.Size([2, 256, 32])         |
| 546     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(1)                  | output              | torch.float32 |         | 0.0000000         | 2.8472507        | 0.2831749      | 0.4061674             | torch.Size([2, 256, 32])         |
| 547     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(1)  | input_0             | torch.float32 |         | 0.0000000         | 2.8472507        | 0.2831749      | 0.4061674             | torch.Size([2, 256, 32])         |
| 547     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(1)  | output              | torch.float32 |         | 0.2831749         | 0.2831749        | 0.2831749      | 0.0000000             | torch.Size([2, 256, 1])          |
| 548     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(1)              | input_0             | torch.float32 |         | 0.0000000         | 2.8472507        | 0.2831749      | 0.4061674             | torch.Size([2, 256, 32])         |
| 548     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(1)              | input_1             | torch.float32 |         | 0.2831749         | 0.2831749        | 0.2831749      | 0.0000000             | torch.Size([2, 256, 1])          |
| 548     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(1)              | output              | torch.float32 |         | -0.2831749        | 2.5640759        | -0.0000000     | 0.4061674             | torch.Size([2, 256, 32])         |
| 549     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(1)              | input_0             | torch.float32 |         | -0.2831749        | 2.5640759        | -0.0000000     | 0.4061674             | torch.Size([2, 256, 32])         |
| 549     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(1)              | input_1             | torch.float32 |         | -0.2831749        | 2.5640759        | -0.0000000     | 0.4061674             | torch.Size([2, 256, 32])         |
| 549     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(1)              | output              | torch.float32 |         | 0.0019748         | 6.5744853        | 0.4061426      | 1.6275870             | torch.Size([2, 256, 32])         |
| 550     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(1)    | input_0             | torch.float32 |         | 0.0019748         | 6.5744853        | 0.4061426      | 1.6275870             | torch.Size([2, 256, 32])         |
| 550     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(1)    | output              | torch.float32 |         | 0.4061426         | 0.4061426        | 0.4061426      | 0.0000000             | torch.Size([2, 256, 1])          |
| 551     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(1)            | input               | torch.float32 |         | 0.4061426         | 0.4061426        | 0.4061426      | 0.0000000             | torch.Size([2, 256, 1])          |
| 551     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(1)            | output              | torch.float32 |         | 1.5691172         | 1.5691172        | 1.5691172      | 0.0000000             | torch.Size([2, 256, 1])          |
| 552     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(1)          | input_0             | torch.float32 |         | -0.2831749        | 2.5640759        | -0.0000000     | 0.4061674             | torch.Size([2, 256, 32])         |
| 552     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(1)          | input_1             | torch.float32 |         | 1.5691172         | 1.5691172        | 1.5691172      | 0.0000000             | torch.Size([2, 256, 1])          |
| 552     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(1)          | output              | torch.float32 |         | -0.4443346        | 4.0233355        | -0.0000000     | 1.0000364             | torch.Size([2, 256, 32])         |
| 553     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(1)     | input               | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 553     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(1)     | output              | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 554     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(1)       | input_0             | torch.float32 |         | -0.4443346        | 4.0233355        | -0.0000000     | 1.0000364             | torch.Size([2, 256, 32])         |
| 554     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(1)       | input_1             | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 554     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(1)       | output              | torch.float32 |         | -0.6526539        | 3.3650775        | -0.0730665     | 0.8091654             | torch.Size([2, 256, 32])         |
| 555     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(1)       | input               | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 555     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(1)       | output              | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 556     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(1)         | input_0             | torch.float32 |         | -0.6526539        | 3.3650775        | -0.0730665     | 0.8091654             | torch.Size([2, 256, 32])         |
| 556     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(1)         | input_1             | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 556     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(1)         | output              | torch.float32 |         | -0.5349309        | 3.2157838        | 0.0073125      | 0.6968725             | torch.Size([2, 256, 32])         |
| 557     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 11])         |
| 557     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 3])          |
| 558     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(1)                   | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 3])          |
| 558     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(1)                   | weight              | torch.float32 |         | -1.0475703        | 0.9848034        | -0.0054673     | 0.2080412             | torch.Size([64, 3])              |
| 558     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(1)                   | bias                | torch.float32 |         | -0.8030427        | 0.5068271        | -0.0504076     | 0.1294928             | torch.Size([64])                 |
| 558     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(1)                   | output              | torch.float32 |         | -0.8030427        | 0.5068271        | -0.0504076     | 0.1274733             | torch.Size([2, 256, 64])         |
| 559     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(1)                   | input               | torch.float32 |         | 0.0000000         | 0.5068271        | 0.1285947      | 0.0274216             | torch.Size([2, 256, 64])         |
| 559     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(1)                   | output              | torch.float32 |         | 0.0000000         | 0.5068271        | 0.1285947      | 0.0274216             | torch.Size([2, 256, 64])         |
| 560     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000000         | 0.5068271        | 0.1285947      | 0.0274216             | torch.Size([2, 256, 64])         |
| 560     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(1)   | output              | torch.float32 |         | 0.1285947         | 0.1285947        | 0.1285947      | 0.0000000             | torch.Size([2, 256, 1])          |
| 561     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(1)               | input_0             | torch.float32 |         | 0.0000000         | 0.5068271        | 0.1285947      | 0.0274216             | torch.Size([2, 256, 64])         |
| 561     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(1)               | input_1             | torch.float32 |         | 0.1285947         | 0.1285947        | 0.1285947      | 0.0000000             | torch.Size([2, 256, 1])          |
| 561     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(1)               | output              | torch.float32 |         | -0.1285947        | 0.3782324        | 0.0000000      | 0.0274216             | torch.Size([2, 256, 64])         |
| 562     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(1)               | input_0             | torch.float32 |         | -0.1285947        | 0.3782324        | 0.0000000      | 0.0274216             | torch.Size([2, 256, 64])         |
| 562     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(1)               | input_1             | torch.float32 |         | -0.1285947        | 0.3782324        | 0.0000000      | 0.0274216             | torch.Size([2, 256, 64])         |
| 562     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(1)               | output              | torch.float32 |         | 0.0001123         | 0.1430598        | 0.0274208      | 0.0013717             | torch.Size([2, 256, 64])         |
| 563     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(1)     | input_0             | torch.float32 |         | 0.0001123         | 0.1430598        | 0.0274208      | 0.0013717             | torch.Size([2, 256, 64])         |
| 563     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(1)     | output              | torch.float32 |         | 0.0274208         | 0.0274208        | 0.0274208      | 0.0000000             | torch.Size([2, 256, 1])          |
| 564     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(1)             | input               | torch.float32 |         | 0.0274208         | 0.0274208        | 0.0274208      | 0.0000000             | torch.Size([2, 256, 1])          |
| 564     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(1)             | output              | torch.float32 |         | 6.0378308         | 6.0378308        | 6.0378308      | 0.0000000             | torch.Size([2, 256, 1])          |
| 565     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(1)           | input_0             | torch.float32 |         | -0.1285947        | 0.3782324        | 0.0000000      | 0.0274216             | torch.Size([2, 256, 64])         |
| 565     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(1)           | input_1             | torch.float32 |         | 6.0378308         | 6.0378308        | 6.0378308      | 0.0000000             | torch.Size([2, 256, 1])          |
| 565     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(1)           | output              | torch.float32 |         | -0.7764331        | 2.2837033        | 0.0000001      | 0.9996659             | torch.Size([2, 256, 64])         |
| 566     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(1)      | input               | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 566     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(1)      | output              | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 567     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(1)        | input_0             | torch.float32 |         | -0.7764331        | 2.2837033        | 0.0000001      | 0.9996659             | torch.Size([2, 256, 64])         |
| 567     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(1)        | input_1             | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 567     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(1)        | output              | torch.float32 |         | -0.8134952        | 2.2592006        | 0.0106135      | 0.9452497             | torch.Size([2, 256, 64])         |
| 568     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(1)        | input               | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 568     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(1)        | output              | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 569     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(1)          | input_0             | torch.float32 |         | -0.8134952        | 2.2592006        | 0.0106135      | 0.9452497             | torch.Size([2, 256, 64])         |
| 569     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(1)          | input_1             | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 569     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(1)          | output              | torch.float32 |         | -0.7752808        | 2.1768949        | 0.0410675      | 0.8370529             | torch.Size([2, 256, 64])         |
| 570     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(1)                   | input               | torch.float32 |         | -0.7752808        | 2.1768949        | 0.0410675      | 0.8370529             | torch.Size([2, 256, 64])         |
| 570     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(1)                   | weight              | torch.float32 |         | -0.4523612        | 0.4813256        | -0.0014562     | 0.0096743             | torch.Size([64, 64])             |
| 570     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(1)                   | bias                | torch.float32 |         | -0.1183558        | 0.2243176        | 0.0150283      | 0.0049289             | torch.Size([64])                 |
| 570     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(1)                   | output              | torch.float32 |         | -5.0684557        | 2.1883044        | -0.4654979     | 2.9017596             | torch.Size([2, 256, 64])         |
| 571     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(1)                   | input               | torch.float32 |         | 0.0000000         | 2.1883044        | 0.3842481      | 0.2487988             | torch.Size([2, 256, 64])         |
| 571     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(1)                   | output              | torch.float32 |         | 0.0000000         | 2.1883044        | 0.3842481      | 0.2487988             | torch.Size([2, 256, 64])         |
| 572     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000000         | 2.1883044        | 0.3842481      | 0.2487988             | torch.Size([2, 256, 64])         |
| 572     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(1)   | output              | torch.float32 |         | 0.3842481         | 0.3842481        | 0.3842481      | 0.0000000             | torch.Size([2, 256, 1])          |
| 573     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(1)               | input_0             | torch.float32 |         | 0.0000000         | 2.1883044        | 0.3842481      | 0.2487988             | torch.Size([2, 256, 64])         |
| 573     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(1)               | input_1             | torch.float32 |         | 0.3842481         | 0.3842481        | 0.3842481      | 0.0000000             | torch.Size([2, 256, 1])          |
| 573     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(1)               | output              | torch.float32 |         | -0.3842481        | 1.8040563        | -0.0000000     | 0.2487988             | torch.Size([2, 256, 64])         |
| 574     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(1)               | input_0             | torch.float32 |         | -0.3842481        | 1.8040563        | -0.0000000     | 0.2487988             | torch.Size([2, 256, 64])         |
| 574     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(1)               | input_1             | torch.float32 |         | -0.3842481        | 1.8040563        | -0.0000000     | 0.2487988             | torch.Size([2, 256, 64])         |
| 574     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(1)               | output              | torch.float32 |         | 0.0000027         | 3.2546191        | 0.2487912      | 0.2572902             | torch.Size([2, 256, 64])         |
| 575     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(1)     | input_0             | torch.float32 |         | 0.0000027         | 3.2546191        | 0.2487912      | 0.2572902             | torch.Size([2, 256, 64])         |
| 575     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(1)     | output              | torch.float32 |         | 0.2487912         | 0.2487912        | 0.2487912      | 0.0000000             | torch.Size([2, 256, 1])          |
| 576     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(1)             | input               | torch.float32 |         | 0.2487912         | 0.2487912        | 0.2487912      | 0.0000000             | torch.Size([2, 256, 1])          |
| 576     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(1)             | output              | torch.float32 |         | 2.0048125         | 2.0048125        | 2.0048125      | 0.0000000             | torch.Size([2, 256, 1])          |
| 577     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(1)           | input_0             | torch.float32 |         | -0.3842481        | 1.8040563        | -0.0000000     | 0.2487988             | torch.Size([2, 256, 64])         |
| 577     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(1)           | input_1             | torch.float32 |         | 2.0048125         | 2.0048125        | 2.0048125      | 0.0000000             | torch.Size([2, 256, 1])          |
| 577     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(1)           | output              | torch.float32 |         | -0.7703454        | 3.6167946        | -0.0000000     | 0.9999903             | torch.Size([2, 256, 64])         |
| 578     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(1)      | input               | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 578     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(1)      | output              | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 579     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(1)        | input_0             | torch.float32 |         | -0.7703454        | 3.6167946        | -0.0000000     | 0.9999903             | torch.Size([2, 256, 64])         |
| 579     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(1)        | input_1             | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 579     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(1)        | output              | torch.float32 |         | -0.8240583        | 3.9466136        | 0.0073575      | 1.0168505             | torch.Size([2, 256, 64])         |
| 580     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(1)        | input               | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 580     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(1)        | output              | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 581     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(1)          | input_0             | torch.float32 |         | -0.8240583        | 3.9466136        | 0.0073575      | 1.0168505             | torch.Size([2, 256, 64])         |
| 581     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(1)          | input_1             | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 581     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(1)          | output              | torch.float32 |         | -0.7902251        | 3.9296980        | 0.0238518      | 0.9615389             | torch.Size([2, 256, 64])         |
| 582     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(1)                   | input               | torch.float32 |         | -0.7902251        | 3.9296980        | 0.0238518      | 0.9615389             | torch.Size([2, 256, 64])         |
| 582     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(1)                   | weight              | torch.float32 |         | -0.5707353        | 0.3620123        | -0.0010372     | 0.0088292             | torch.Size([64, 64])             |
| 582     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(1)                   | bias                | torch.float32 |         | -0.1720246        | 0.1340137        | -0.0235144     | 0.0050507             | torch.Size([64])                 |
| 582     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(1)                   | output              | torch.float32 |         | -5.1970620        | 3.6431391        | -0.3962240     | 2.7808344             | torch.Size([2, 256, 64])         |
| 583     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(1)                   | input               | torch.float32 |         | 0.0000000         | 3.6431391        | 0.4959931      | 0.6320692             | torch.Size([2, 256, 64])         |
| 583     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(1)                   | output              | torch.float32 |         | 0.0000000         | 3.6431391        | 0.4959931      | 0.6320692             | torch.Size([2, 256, 64])         |
| 584     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(1)   | input_0             | torch.float32 |         | 0.0000000         | 3.6431391        | 0.4959931      | 0.6320692             | torch.Size([2, 256, 64])         |
| 584     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(1)   | output              | torch.float32 |         | 0.4959931         | 0.4959931        | 0.4959931      | 0.0000000             | torch.Size([2, 256, 1])          |
| 585     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(1)               | input_0             | torch.float32 |         | 0.0000000         | 3.6431391        | 0.4959931      | 0.6320692             | torch.Size([2, 256, 64])         |
| 585     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(1)               | input_1             | torch.float32 |         | 0.4959931         | 0.4959931        | 0.4959931      | 0.0000000             | torch.Size([2, 256, 1])          |
| 585     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(1)               | output              | torch.float32 |         | -0.4959931        | 3.1471460        | 0.0000000      | 0.6320692             | torch.Size([2, 256, 64])         |
| 586     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(1)               | input_0             | torch.float32 |         | -0.4959931        | 3.1471460        | 0.0000000      | 0.6320692             | torch.Size([2, 256, 64])         |
| 586     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(1)               | input_1             | torch.float32 |         | -0.4959931        | 3.1471460        | 0.0000000      | 0.6320692             | torch.Size([2, 256, 64])         |
| 586     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(1)               | output              | torch.float32 |         | 0.0202767         | 9.9045277        | 0.6320499      | 1.8688437             | torch.Size([2, 256, 64])         |
| 587     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(1)     | input_0             | torch.float32 |         | 0.0202767         | 9.9045277        | 0.6320499      | 1.8688437             | torch.Size([2, 256, 64])         |
| 587     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(1)     | output              | torch.float32 |         | 0.6320499         | 0.6320499        | 0.6320499      | 0.0000000             | torch.Size([2, 256, 1])          |
| 588     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(1)             | input               | torch.float32 |         | 0.6320499         | 0.6320499        | 0.6320499      | 0.0000000             | torch.Size([2, 256, 1])          |
| 588     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(1)             | output              | torch.float32 |         | 1.2578268         | 1.2578268        | 1.2578268      | 0.0000000             | torch.Size([2, 256, 1])          |
| 589     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(1)           | input_0             | torch.float32 |         | -0.4959931        | 3.1471460        | 0.0000000      | 0.6320692             | torch.Size([2, 256, 64])         |
| 589     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(1)           | input_1             | torch.float32 |         | 1.2578268         | 1.2578268        | 1.2578268      | 0.0000000             | torch.Size([2, 256, 1])          |
| 589     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(1)           | output              | torch.float32 |         | -0.6238735        | 3.9585645        | -0.0000001     | 1.0000145             | torch.Size([2, 256, 64])         |
| 590     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(1)      | input               | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 590     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(1)      | output              | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 591     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(1)        | input_0             | torch.float32 |         | -0.6238735        | 3.9585645        | -0.0000001     | 1.0000145             | torch.Size([2, 256, 64])         |
| 591     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(1)        | input_1             | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 591     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(1)        | output              | torch.float32 |         | -0.7171651        | 4.0930328        | 0.0121245      | 1.0184869             | torch.Size([2, 256, 64])         |
| 592     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(1)        | input               | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 592     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(1)        | output              | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 593     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(1)          | input_0             | torch.float32 |         | -0.7171651        | 4.0930328        | 0.0121245      | 1.0184869             | torch.Size([2, 256, 64])         |
| 593     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(1)          | input_1             | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 593     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(1)          | output              | torch.float32 |         | -0.6968011        | 4.0786624        | 0.0254073      | 0.9968722             | torch.Size([2, 256, 64])         |
| 594     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(1)                   | input               | torch.float32 |         | -0.6968011        | 4.0786624        | 0.0254073      | 0.9968722             | torch.Size([2, 256, 64])         |
| 594     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(1)                   | weight              | torch.float32 |         | -0.5701389        | 0.3477888        | 0.0006721      | 0.0085883             | torch.Size([64, 64])             |
| 594     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(1)                   | bias                | torch.float32 |         | -0.1677032        | 0.1709885        | -0.0237130     | 0.0070098             | torch.Size([64])                 |
| 594     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(1)                   | output              | torch.float32 |         | -4.2088842        | 7.1198206        | -0.6338486     | 2.1093760             | torch.Size([2, 256, 64])         |
| 595     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(1)                  | input               | torch.float32 |         | 0.0000000         | 7.1198206        | 0.2196430      | 0.8057319             | torch.Size([2, 256, 64])         |
| 595     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(1)                  | output              | torch.float32 |         | 0.0000000         | 7.1198206        | 0.2196430      | 0.8057319             | torch.Size([2, 256, 64])         |
| 596     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(1)  | input_0             | torch.float32 |         | 0.0000000         | 7.1198206        | 0.2196430      | 0.8057319             | torch.Size([2, 256, 64])         |
| 596     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(1)  | output              | torch.float32 |         | 0.2196430         | 0.2196430        | 0.2196430      | 0.0000000             | torch.Size([2, 256, 1])          |
| 597     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(1)              | input_0             | torch.float32 |         | 0.0000000         | 7.1198206        | 0.2196430      | 0.8057319             | torch.Size([2, 256, 64])         |
| 597     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(1)              | input_1             | torch.float32 |         | 0.2196430         | 0.2196430        | 0.2196430      | 0.0000000             | torch.Size([2, 256, 1])          |
| 597     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(1)              | output              | torch.float32 |         | -0.2196430        | 6.9001775        | -0.0000000     | 0.8057319             | torch.Size([2, 256, 64])         |
| 598     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(1)              | input_0             | torch.float32 |         | -0.2196430        | 6.9001775        | -0.0000000     | 0.8057319             | torch.Size([2, 256, 64])         |
| 598     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(1)              | input_1             | torch.float32 |         | -0.2196430        | 6.9001775        | -0.0000000     | 0.8057319             | torch.Size([2, 256, 64])         |
| 598     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(1)              | output              | torch.float32 |         | 0.0045184         | 47.6124496       | 0.8057072      | 34.7802696            | torch.Size([2, 256, 64])         |
| 599     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(1)    | input_0             | torch.float32 |         | 0.0045184         | 47.6124496       | 0.8057072      | 34.7802696            | torch.Size([2, 256, 64])         |
| 599     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(1)    | output              | torch.float32 |         | 0.8057072         | 0.8057072        | 0.8057072      | 0.0000000             | torch.Size([2, 256, 1])          |
| 600     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(1)            | input               | torch.float32 |         | 0.8057072         | 0.8057072        | 0.8057072      | 0.0000000             | torch.Size([2, 256, 1])          |
| 600     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(1)            | output              | torch.float32 |         | 1.1140603         | 1.1140603        | 1.1140603      | 0.0000000             | torch.Size([2, 256, 1])          |
| 601     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(1)          | input_0             | torch.float32 |         | -0.2196430        | 6.9001775        | -0.0000000     | 0.8057319             | torch.Size([2, 256, 64])         |
| 601     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(1)          | input_1             | torch.float32 |         | 1.1140603         | 1.1140603        | 1.1140603      | 0.0000000             | torch.Size([2, 256, 1])          |
| 601     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(1)          | output              | torch.float32 |         | -0.2446956        | 7.6872139        | -0.0000001     | 1.0000182             | torch.Size([2, 256, 64])         |
| 602     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(1)     | input               | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 602     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(1)     | output              | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 603     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(1)       | input_0             | torch.float32 |         | -0.2446956        | 7.6872139        | -0.0000001     | 1.0000182             | torch.Size([2, 256, 64])         |
| 603     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(1)       | input_1             | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 603     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(1)       | output              | torch.float32 |         | -0.3138221        | 5.6094851        | -0.0418145     | 0.5680106             | torch.Size([2, 256, 64])         |
| 604     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(1)       | input               | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 604     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(1)       | output              | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 605     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(1)         | input_0             | torch.float32 |         | -0.3138221        | 5.6094851        | -0.0418145     | 0.5680106             | torch.Size([2, 256, 64])         |
| 605     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(1)         | input_1             | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 605     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(1)         | output              | torch.float32 |         | -0.3565792        | 5.3709445        | 0.0481909      | 0.4860508             | torch.Size([2, 256, 64])         |
| 606     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | input_0             | torch.float32 |         | -0.8540859        | 4.6532683        | 0.0778289      | 0.9581357             | torch.Size([2, 256, 128])        |
| 606     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | input_1             | torch.float32 |         | -1.0113386        | 4.0937653        | -0.0078829     | 1.3019068             | torch.Size([2, 256, 32])         |
| 606     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | input_2             | torch.float32 |         | -0.5349309        | 3.2157838        | 0.0073125      | 0.6968725             | torch.Size([2, 256, 32])         |
| 606     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | input_3             | torch.float32 |         | -0.3565792        | 5.3709445        | 0.0481909      | 0.4860508             | torch.Size([2, 256, 64])         |
| 606     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(1)                        | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0508909      | 0.8514420             | torch.Size([2, 256, 256])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | input_0             | torch.float32 |         | -0.8671875        | 0.8359375        | -0.1171943     | 0.0536020             | torch.Size([12, 3, 256, 704])    |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_0            | torch.float32 |         | -0.7109375        | 0.8359375        | -0.0736803     | 0.0375602             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_1            | torch.float32 |         | -0.7578125        | 0.8125000        | -0.1215375     | 0.0386390             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_2            | torch.float32 |         | -0.7656250        | 0.6796875        | -0.0698674     | 0.0240641             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_3            | torch.float32 |         | -0.6093750        | 0.8281250        | -0.0708556     | 0.0246479             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_4            | torch.float32 |         | -0.8437500        | 0.8281250        | -0.0984946     | 0.0571401             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_5            | torch.float32 |         | -0.7812500        | 0.8281250        | -0.0661624     | 0.0312031             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_6            | torch.float32 |         | -0.8671875        | 0.8203125        | -0.1705534     | 0.0695115             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_7            | torch.float32 |         | -0.8359375        | 0.8359375        | -0.1157308     | 0.0423470             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_8            | torch.float32 |         | -0.8437500        | 0.8359375        | -0.1084403     | 0.0604877             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_9            | torch.float32 |         | -0.8671875        | 0.8203125        | -0.1853776     | 0.0802727             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_10           | torch.float32 |         | -0.8593750        | 0.8359375        | -0.1273327     | 0.0596137             | torch.Size([3, 256, 704])        |
| 607     | torch.Tensor.unbind                                                               | head                                              | output_11           | torch.float32 |         | -0.8593750        | 0.8359375        | -0.1982997     | 0.0942286             | torch.Size([3, 256, 704])        |
| 608     | torch.Tensor.double                                                               | head                                              | input               | torch.float64 |         | -646.5387754      | 667.1202547      | -52.6064262    | 47748.4609375         | torch.Size([12, 4, 4])           |
| 608     | torch.Tensor.double                                                               | head                                              | output              | torch.float64 |         | -646.5387754      | 667.1202547      | -52.6064262    | 47748.4609375         | torch.Size([12, 4, 4])           |
| 609     | torch.matmul                                                                      | head                                              | input_0             | torch.float64 |         | -1.0000000        | 1.0000000        | 0.0006658      | 0.2513128             | torch.Size([12, 4, 4])           |
| 609     | torch.matmul                                                                      | head                                              | input_1             | torch.float64 |         | -646.5387754      | 667.1202547      | -52.6064262    | 47748.4609375         | torch.Size([12, 4, 4])           |
| 609     | torch.matmul                                                                      | head                                              | output              | torch.float64 |         | -4.3784015        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([12, 4, 4])           |
| 610     | torch.Tensor.view                                                                 | head                                              | input_0             | torch.float64 |         | -4.3784015        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([12, 4, 4])           |
| 610     | torch.Tensor.view                                                                 | head                                              | output              | torch.float64 |         | -4.3784015        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 611     | torch.Tensor.float                                                                | head                                              | input               | torch.float64 |         | -4.3784015        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 611     | torch.Tensor.float                                                                | head                                              | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 612     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.mat_quant_stub                               | input               | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 612     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.mat_quant_stub                               | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 613     | torch.nn.modules.linear.Linear                                                    | head.fc_before                                    | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 613     | torch.nn.modules.linear.Linear                                                    | head.fc_before                                    | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 613     | torch.nn.modules.linear.Linear                                                    | head.fc_before                                    | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 614     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.0.query_cat                           | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 512, 256])        |
| 614     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.0.query_cat                           | input_1             | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0540784      | 0.8482460             | torch.Size([2, 512, 256])        |
| 614     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.0.query_cat                           | output              | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0270392      | 0.4248533             | torch.Size([2, 512, 512])        |
| 615     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.0.key_cat                             | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 615     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.0.key_cat                             | input_1             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0508909      | 0.8514420             | torch.Size([2, 256, 256])        |
| 615     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.0.key_cat                             | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 616     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | input_0             | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0270392      | 0.4248533             | torch.Size([2, 512, 512])        |
| 616     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | output              | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0270392      | 0.4248533             | torch.Size([512, 2, 512])        |
| 617     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 617     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 618     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 618     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 619     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | input_0             | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0270392      | 0.4248533             | torch.Size([512, 2, 512])        |
| 619     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | output              | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0270392      | 0.4248533             | torch.Size([512, 2, 512])        |
| 620     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 620     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 621     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 621     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 622     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.q_proj                         | input               | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0270392      | 0.4248533             | torch.Size([512, 2, 512])        |
| 622     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.q_proj                         | weight              | torch.float32 |         | -0.2786695        | 0.2698635        | 0.0002171      | 0.0036005             | torch.Size([512, 512])           |
| 622     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.q_proj                         | bias                | torch.float32 |         | -0.1025436        | 0.1140026        | -0.0003242     | 0.0019732             | torch.Size([512])                |
| 622     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.q_proj                         | output              | torch.float32 |         | -9.4644670        | 9.0727606        | -0.0663809     | 5.4071217             | torch.Size([512, 2, 512])        |
| 623     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.k_proj                         | input               | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 623     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.k_proj                         | weight              | torch.float32 |         | -0.2842779        | 0.2792765        | -0.0001027     | 0.0036413             | torch.Size([512, 512])           |
| 623     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.k_proj                         | bias                | torch.float32 |         | -0.0096402        | 0.0094814        | 0.0000140      | 0.0000141             | torch.Size([512])                |
| 623     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.k_proj                         | output              | torch.float32 |         | -6.6910477        | 7.0933571        | 0.1357392      | 6.6740856             | torch.Size([256, 2, 512])        |
| 624     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.v_proj                         | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 624     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.v_proj                         | weight              | torch.float32 |         | -0.1630211        | 0.1449102        | 0.0001645      | 0.0010630             | torch.Size([512, 512])           |
| 624     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.v_proj                         | bias                | torch.float32 |         | -0.0888495        | 0.0985312        | -0.0008267     | 0.0008712             | torch.Size([512])                |
| 624     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.v_proj                         | output              | torch.float32 |         | -0.0888495        | 0.0985312        | -0.0008267     | 0.0008695             | torch.Size([256, 2, 512])        |
| 625     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | input_0             | torch.float32 |         | -9.4644670        | 9.0727606        | -0.0663809     | 5.4071217             | torch.Size([512, 2, 512])        |
| 625     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | output              | torch.float32 |         | -9.4644670        | 9.0727606        | -0.0663809     | 5.4071217             | torch.Size([512, 16, 64])        |
| 626     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | input_0             | torch.float32 |         | -9.4644670        | 9.0727606        | -0.0663809     | 5.4071217             | torch.Size([512, 16, 64])        |
| 626     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | output              | torch.float32 |         | -9.4644670        | 9.0727606        | -0.0663809     | 5.4071217             | torch.Size([16, 512, 64])        |
| 627     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | input_0             | torch.float32 |         | -6.6910477        | 7.0933571        | 0.1357392      | 6.6740856             | torch.Size([256, 2, 512])        |
| 627     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | output              | torch.float32 |         | -6.6910477        | 7.0933571        | 0.1357392      | 6.6740856             | torch.Size([256, 16, 64])        |
| 628     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | input_0             | torch.float32 |         | -6.6910477        | 7.0933571        | 0.1357392      | 6.6740856             | torch.Size([256, 16, 64])        |
| 628     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | output              | torch.float32 |         | -6.6910477        | 7.0933571        | 0.1357392      | 6.6740856             | torch.Size([16, 256, 64])        |
| 629     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | input_0             | torch.float32 |         | -0.0888495        | 0.0985312        | -0.0008267     | 0.0008695             | torch.Size([256, 2, 512])        |
| 629     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | output              | torch.float32 |         | -0.0888495        | 0.0985312        | -0.0008267     | 0.0008695             | torch.Size([256, 16, 64])        |
| 630     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | input_0             | torch.float32 |         | -0.0888495        | 0.0985312        | -0.0008267     | 0.0008695             | torch.Size([256, 16, 64])        |
| 630     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | output              | torch.float32 |         | -0.0888495        | 0.0985312        | -0.0008267     | 0.0008695             | torch.Size([16, 256, 64])        |
| 631     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.0.attn.q_scale_mul                    | input_0             | torch.float32 |         | -9.4644670        | 9.0727606        | -0.0663809     | 5.4071217             | torch.Size([16, 512, 64])        |
| 631     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.0.attn.q_scale_mul                    | output              | torch.float32 |         | -1.1830584        | 1.1340951        | -0.0082976     | 0.0844863             | torch.Size([16, 512, 64])        |
| 632     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | input_0             | torch.float32 |         | -6.6910477        | 7.0933571        | 0.1357392      | 6.6740856             | torch.Size([16, 256, 64])        |
| 632     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | output              | torch.float32 |         | -6.6910477        | 7.0933571        | 0.1357392      | 6.6740856             | torch.Size([16, 64, 256])        |
| 633     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.0.attn.matmul                         | input_0             | torch.float32 |         | -1.1830584        | 1.1340951        | -0.0082976     | 0.0844863             | torch.Size([16, 512, 64])        |
| 633     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.0.attn.matmul                         | input_1             | torch.float32 |         | -6.6910477        | 7.0933571        | 0.1357392      | 6.6740856             | torch.Size([16, 64, 256])        |
| 633     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.0.attn.matmul                         | output              | torch.float32 |         | -79.8859558       | 85.5058517       | 1.8801069      | 742.0394897           | torch.Size([16, 512, 256])       |
| 634     | torch.Tensor.max                                                                  | head.layers.0.attn.softmax                        | input               | torch.float32 |         | -79.8859558       | 85.5058517       | 1.8801069      | 742.0394897           | torch.Size([16, 512, 256])       |
| 634     | torch.Tensor.max                                                                  | head.layers.0.attn.softmax                        | output_0            | torch.float32 |         | -79.8859558       | 85.5058517       | 1.8801066      | 742.1296997           | torch.Size([16, 512, 1])         |
| 634     | torch.Tensor.max                                                                  | head.layers.0.attn.softmax                        | output_1            | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 1])         |
| 635     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.0.attn.softmax.sub                    | input_0             | torch.float32 |         | -79.8859558       | 85.5058517       | 1.8801069      | 742.0394897           | torch.Size([16, 512, 256])       |
| 635     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.0.attn.softmax.sub                    | input_1             | torch.float32 |         | -79.8859558       | 85.5058517       | 1.8801066      | 742.1296997           | torch.Size([16, 512, 1])         |
| 635     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.0.attn.softmax.sub                    | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 636     | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.0.attn.softmax.exp                    | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 636     | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.0.attn.softmax.exp                    | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 637     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.0.attn.softmax.sum                    | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 637     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.0.attn.softmax.sum                    | output              | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 638     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.0.attn.softmax.reciprocal             | input               | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 638     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.0.attn.softmax.reciprocal             | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 639     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.0.attn.softmax.mul                    | input_0             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 639     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.0.attn.softmax.mul                    | input_1             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 639     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.0.attn.softmax.mul                    | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 640     | torch.nn.modules.dropout.Dropout                                                  | head.layers.0.attn.attention_drop                 | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 640     | torch.nn.modules.dropout.Dropout                                                  | head.layers.0.attn.attention_drop                 | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 641     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.0.attn.attn_matmul                    | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 641     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.0.attn.attn_matmul                    | input_1             | torch.float32 |         | -0.0888495        | 0.0985312        | -0.0008267     | 0.0008695             | torch.Size([16, 256, 64])        |
| 641     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.0.attn.attn_matmul                    | output              | torch.float32 |         | -0.0888495        | 0.0985313        | -0.0008267     | 0.0008695             | torch.Size([16, 512, 64])        |
| 642     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | input_0             | torch.float32 |         | -0.0888495        | 0.0985313        | -0.0008267     | 0.0008695             | torch.Size([16, 512, 64])        |
| 642     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | output              | torch.float32 |         | -0.0888495        | 0.0985313        | -0.0008267     | 0.0008695             | torch.Size([512, 16, 64])        |
| 643     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | input_0             | torch.float32 |         | -0.0888495        | 0.0985313        | -0.0008267     | 0.0008695             | torch.Size([512, 16, 64])        |
| 643     | torch.Tensor.reshape                                                              | head.layers.0.attn                                | output              | torch.float32 |         | -0.0888495        | 0.0985313        | -0.0008267     | 0.0008695             | torch.Size([512, 2, 512])        |
| 644     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.out_proj                       | input               | torch.float32 |         | -0.0888495        | 0.0985313        | -0.0008267     | 0.0008695             | torch.Size([512, 2, 512])        |
| 644     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.out_proj                       | weight              | torch.float32 |         | -0.1874478        | 0.1759859        | -0.0001105     | 0.0022686             | torch.Size([512, 512])           |
| 644     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.out_proj                       | bias                | torch.float32 |         | -0.3150745        | 0.2518794        | 0.0131974      | 0.0093190             | torch.Size([512])                |
| 644     | torch.nn.modules.linear.Linear                                                    | head.layers.0.attn.out_proj                       | output              | torch.float32 |         | -0.6143208        | 0.4610773        | 0.0225451      | 0.0190544             | torch.Size([512, 2, 512])        |
| 645     | torch.Tensor.view                                                                 | head.layers.0.attn                                | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 645     | torch.Tensor.view                                                                 | head.layers.0.attn                                | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 646     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.0.attn.attn_weights_mean              | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 646     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.0.attn.attn_weights_mean              | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 512, 256])        |
| 647     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | input_0             | torch.float32 |         | -0.6143208        | 0.4610773        | 0.0225451      | 0.0190544             | torch.Size([512, 2, 512])        |
| 647     | torch.Tensor.transpose                                                            | head.layers.0.attn                                | output              | torch.float32 |         | -0.6143208        | 0.4610773        | 0.0225451      | 0.0190544             | torch.Size([2, 512, 512])        |
| 648     | torch.nn.modules.dropout.Dropout                                                  | head.layers.0.dropout                             | input               | torch.float32 |         | -0.6143208        | 0.4610773        | 0.0225451      | 0.0190544             | torch.Size([2, 512, 512])        |
| 648     | torch.nn.modules.dropout.Dropout                                                  | head.layers.0.dropout                             | output              | torch.float32 |         | -0.6143208        | 0.4610773        | 0.0225451      | 0.0190544             | torch.Size([2, 512, 512])        |
| 649     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.0.add                                 | input_0             | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0270392      | 0.4248533             | torch.Size([2, 512, 512])        |
| 649     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.0.add                                 | input_1             | torch.float32 |         | -0.6143208        | 0.4610773        | 0.0225451      | 0.0190544             | torch.Size([2, 512, 512])        |
| 649     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.0.add                                 | output              | torch.float32 |         | -1.2518053        | 7.8813105        | 0.0495843      | 0.3829227             | torch.Size([2, 512, 512])        |
| 650     | torch.nn.modules.linear.Linear                                                    | head.fc_after                                     | input               | torch.float32 |         | -1.2518053        | 7.8813105        | 0.0495843      | 0.3829227             | torch.Size([2, 512, 512])        |
| 650     | torch.nn.modules.linear.Linear                                                    | head.fc_after                                     | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 650     | torch.nn.modules.linear.Linear                                                    | head.fc_after                                     | output              | torch.float32 |         | -6.8353300        | 4.6332374        | -0.0157804     | 0.6241359             | torch.Size([2, 512, 256])        |
| 651     | torch.nn.modules.linear.Linear                                                    | head.fc_before(1)                                 | input               | torch.float32 |         | -6.8353300        | 4.6332374        | -0.0157804     | 0.6241359             | torch.Size([2, 512, 256])        |
| 651     | torch.nn.modules.linear.Linear                                                    | head.fc_before(1)                                 | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 651     | torch.nn.modules.linear.Linear                                                    | head.fc_before(1)                                 | output              | torch.float32 |         | -4.4913611        | 3.9210269        | 0.0028262      | 0.0364259             | torch.Size([2, 512, 512])        |
| 652     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.1.query_cat                           | input_0             | torch.float32 |         | -6.8353300        | 4.6332374        | -0.0157804     | 0.6241359             | torch.Size([2, 512, 256])        |
| 652     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.1.query_cat                           | input_1             | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0540784      | 0.8482460             | torch.Size([2, 512, 256])        |
| 652     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.1.query_cat                           | output              | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([2, 512, 512])        |
| 653     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.1.key_cat                             | input_0             | torch.float32 |         | -6.8353300        | 4.6332374        | -0.0157804     | 0.6241359             | torch.Size([2, 512, 256])        |
| 653     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.1.key_cat                             | input_1             | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0540784      | 0.8482460             | torch.Size([2, 512, 256])        |
| 653     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.1.key_cat                             | output              | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([2, 512, 512])        |
| 654     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | input_0             | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([2, 512, 512])        |
| 654     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | output              | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([512, 2, 512])        |
| 655     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | input_0             | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([2, 512, 512])        |
| 655     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | output              | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([512, 2, 512])        |
| 656     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | input_0             | torch.float32 |         | -4.4913611        | 3.9210269        | 0.0028262      | 0.0364259             | torch.Size([2, 512, 512])        |
| 656     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | output              | torch.float32 |         | -4.4913611        | 3.9210269        | 0.0028262      | 0.0364259             | torch.Size([512, 2, 512])        |
| 657     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | input_0             | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([512, 2, 512])        |
| 657     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | output              | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([512, 2, 512])        |
| 658     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | input_0             | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([512, 2, 512])        |
| 658     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | output              | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([512, 2, 512])        |
| 659     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | input_0             | torch.float32 |         | -4.4913611        | 3.9210269        | 0.0028262      | 0.0364259             | torch.Size([512, 2, 512])        |
| 659     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | output              | torch.float32 |         | -4.4913611        | 3.9210269        | 0.0028262      | 0.0364259             | torch.Size([512, 2, 512])        |
| 660     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.q_proj                         | input               | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([512, 2, 512])        |
| 660     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.q_proj                         | weight              | torch.float32 |         | -0.6016091        | 0.5586885        | -0.0000813     | 0.0032236             | torch.Size([512, 512])           |
| 660     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.q_proj                         | bias                | torch.float32 |         | -0.1268833        | 0.1088683        | -0.0012884     | 0.0012951             | torch.Size([512])                |
| 660     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.q_proj                         | output              | torch.float32 |         | -13.9657841       | 13.2377052       | -0.0186527     | 13.3632402            | torch.Size([512, 2, 512])        |
| 661     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.k_proj                         | input               | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([512, 2, 512])        |
| 661     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.k_proj                         | weight              | torch.float32 |         | -0.3812873        | 0.4850378        | -0.0000840     | 0.0033379             | torch.Size([512, 512])           |
| 661     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.k_proj                         | bias                | torch.float32 |         | -0.0197120        | 0.0165953        | -0.0001635     | 0.0000225             | torch.Size([512])                |
| 661     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.k_proj                         | output              | torch.float32 |         | -15.1474581       | 14.8608704       | 0.0160838      | 7.1239891             | torch.Size([512, 2, 512])        |
| 662     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.v_proj                         | input               | torch.float32 |         | -4.4913611        | 3.9210269        | 0.0028262      | 0.0364259             | torch.Size([512, 2, 512])        |
| 662     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.v_proj                         | weight              | torch.float32 |         | -0.1545264        | 0.1564725        | -0.0000621     | 0.0008573             | torch.Size([512, 512])           |
| 662     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.v_proj                         | bias                | torch.float32 |         | -0.1773102        | 0.2198186        | 0.0024783      | 0.0030017             | torch.Size([512])                |
| 662     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.v_proj                         | output              | torch.float32 |         | -1.5744855        | 1.0361440        | 0.0023639      | 0.0286445             | torch.Size([512, 2, 512])        |
| 663     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | input_0             | torch.float32 |         | -13.9657841       | 13.2377052       | -0.0186527     | 13.3632402            | torch.Size([512, 2, 512])        |
| 663     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | output              | torch.float32 |         | -13.9657841       | 13.2377052       | -0.0186527     | 13.3632402            | torch.Size([512, 16, 64])        |
| 664     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | input_0             | torch.float32 |         | -13.9657841       | 13.2377052       | -0.0186527     | 13.3632402            | torch.Size([512, 16, 64])        |
| 664     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | output              | torch.float32 |         | -13.9657841       | 13.2377052       | -0.0186527     | 13.3632402            | torch.Size([16, 512, 64])        |
| 665     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | input_0             | torch.float32 |         | -15.1474581       | 14.8608704       | 0.0160838      | 7.1239891             | torch.Size([512, 2, 512])        |
| 665     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | output              | torch.float32 |         | -15.1474581       | 14.8608704       | 0.0160838      | 7.1239891             | torch.Size([512, 16, 64])        |
| 666     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | input_0             | torch.float32 |         | -15.1474581       | 14.8608704       | 0.0160838      | 7.1239891             | torch.Size([512, 16, 64])        |
| 666     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | output              | torch.float32 |         | -15.1474581       | 14.8608704       | 0.0160838      | 7.1239891             | torch.Size([16, 512, 64])        |
| 667     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | input_0             | torch.float32 |         | -1.5744855        | 1.0361440        | 0.0023639      | 0.0286445             | torch.Size([512, 2, 512])        |
| 667     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | output              | torch.float32 |         | -1.5744855        | 1.0361440        | 0.0023639      | 0.0286445             | torch.Size([512, 16, 64])        |
| 668     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | input_0             | torch.float32 |         | -1.5744855        | 1.0361440        | 0.0023639      | 0.0286445             | torch.Size([512, 16, 64])        |
| 668     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | output              | torch.float32 |         | -1.5744855        | 1.0361440        | 0.0023639      | 0.0286445             | torch.Size([16, 512, 64])        |
| 669     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.1.attn.q_scale_mul                    | input_0             | torch.float32 |         | -13.9657841       | 13.2377052       | -0.0186527     | 13.3632402            | torch.Size([16, 512, 64])        |
| 669     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.1.attn.q_scale_mul                    | output              | torch.float32 |         | -1.7457230        | 1.6547132        | -0.0023316     | 0.2088006             | torch.Size([16, 512, 64])        |
| 670     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | input_0             | torch.float32 |         | -15.1474581       | 14.8608704       | 0.0160838      | 7.1239891             | torch.Size([16, 512, 64])        |
| 670     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | output              | torch.float32 |         | -15.1474581       | 14.8608704       | 0.0160838      | 7.1239891             | torch.Size([16, 64, 512])        |
| 671     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.1.attn.matmul                         | input_0             | torch.float32 |         | -1.7457230        | 1.6547132        | -0.0023316     | 0.2088006             | torch.Size([16, 512, 64])        |
| 671     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.1.attn.matmul                         | input_1             | torch.float32 |         | -15.1474581       | 14.8608704       | 0.0160838      | 7.1239891             | torch.Size([16, 64, 512])        |
| 671     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.1.attn.matmul                         | output              | torch.float32 |         | -444.8888550      | 278.5216064      | -11.4457893    | 1074.4569092          | torch.Size([16, 512, 512])       |
| 672     | torch.Tensor.max                                                                  | head.layers.1.attn.softmax                        | input               | torch.float32 |         | -444.8888550      | 278.5216064      | -11.4457893    | 1074.4569092          | torch.Size([16, 512, 512])       |
| 672     | torch.Tensor.max                                                                  | head.layers.1.attn.softmax                        | output_0            | torch.float32 |         | -13.0455446       | 278.5216064      | 60.9335327     | 2213.5969238          | torch.Size([16, 512, 1])         |
| 672     | torch.Tensor.max                                                                  | head.layers.1.attn.softmax                        | output_1            | torch.int64   |         | 0.0000000         | 494.0000000      | 261.1508789    | 15278.9414062         | torch.Size([16, 512, 1])         |
| 673     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.1.attn.softmax.sub                    | input_0             | torch.float32 |         | -444.8888550      | 278.5216064      | -11.4457893    | 1074.4569092          | torch.Size([16, 512, 512])       |
| 673     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.1.attn.softmax.sub                    | input_1             | torch.float32 |         | -13.0455446       | 278.5216064      | 60.9335327     | 2213.5969238          | torch.Size([16, 512, 1])         |
| 673     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.1.attn.softmax.sub                    | output              | torch.float32 |         | -648.1047974      | 0.0000000        | -72.3793182    | 2855.7226562          | torch.Size([16, 512, 512])       |
| 674     | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.1.attn.softmax.exp                    | input               | torch.float32 |         | -648.1047974      | 0.0000000        | -72.3793182    | 2855.7226562          | torch.Size([16, 512, 512])       |
| 674     | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.1.attn.softmax.exp                    | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0261821      | 0.0251499             | torch.Size([16, 512, 512])       |
| 675     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.1.attn.softmax.sum                    | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0261821      | 0.0251499             | torch.Size([16, 512, 512])       |
| 675     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.1.attn.softmax.sum                    | output              | torch.float32 |         | 1.0000000         | 131.7462463      | 13.4052410     | 1388.6071777          | torch.Size([16, 512, 1])         |
| 676     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.1.attn.softmax.reciprocal             | input               | torch.float32 |         | 1.0000000         | 131.7462463      | 13.4052410     | 1388.6071777          | torch.Size([16, 512, 1])         |
| 676     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.1.attn.softmax.reciprocal             | output              | torch.float32 |         | 0.0075903         | 1.0000000        | 0.7940849      | 0.1044719             | torch.Size([16, 512, 1])         |
| 677     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.1.attn.softmax.mul                    | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0261821      | 0.0251499             | torch.Size([16, 512, 512])       |
| 677     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.1.attn.softmax.mul                    | input_1             | torch.float32 |         | 0.0075903         | 1.0000000        | 0.7940849      | 0.1044719             | torch.Size([16, 512, 1])         |
| 677     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.1.attn.softmax.mul                    | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0014674             | torch.Size([16, 512, 512])       |
| 678     | torch.nn.modules.dropout.Dropout                                                  | head.layers.1.attn.attention_drop                 | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0014674             | torch.Size([16, 512, 512])       |
| 678     | torch.nn.modules.dropout.Dropout                                                  | head.layers.1.attn.attention_drop                 | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0014674             | torch.Size([16, 512, 512])       |
| 679     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.1.attn.attn_matmul                    | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0014674             | torch.Size([16, 512, 512])       |
| 679     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.1.attn.attn_matmul                    | input_1             | torch.float32 |         | -1.5744855        | 1.0361440        | 0.0023639      | 0.0286445             | torch.Size([16, 512, 64])        |
| 679     | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.1.attn.attn_matmul                    | output              | torch.float32 |         | -1.5744829        | 0.7760016        | -0.0082111     | 0.0327196             | torch.Size([16, 512, 64])        |
| 680     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | input_0             | torch.float32 |         | -1.5744829        | 0.7760016        | -0.0082111     | 0.0327196             | torch.Size([16, 512, 64])        |
| 680     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | output              | torch.float32 |         | -1.5744829        | 0.7760016        | -0.0082111     | 0.0327196             | torch.Size([512, 16, 64])        |
| 681     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | input_0             | torch.float32 |         | -1.5744829        | 0.7760016        | -0.0082111     | 0.0327196             | torch.Size([512, 16, 64])        |
| 681     | torch.Tensor.reshape                                                              | head.layers.1.attn                                | output              | torch.float32 |         | -1.5744829        | 0.7760016        | -0.0082111     | 0.0327196             | torch.Size([512, 2, 512])        |
| 682     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.out_proj                       | input               | torch.float32 |         | -1.5744829        | 0.7760016        | -0.0082111     | 0.0327196             | torch.Size([512, 2, 512])        |
| 682     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.out_proj                       | weight              | torch.float32 |         | -0.1796134        | 0.1793741        | 0.0000376      | 0.0020221             | torch.Size([512, 512])           |
| 682     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.out_proj                       | bias                | torch.float32 |         | -0.3707179        | 0.3755981        | -0.0065476     | 0.0208958             | torch.Size([512])                |
| 682     | torch.nn.modules.linear.Linear                                                    | head.layers.1.attn.out_proj                       | output              | torch.float32 |         | -1.4716967        | 1.2127035        | 0.0119910      | 0.1097871             | torch.Size([512, 2, 512])        |
| 683     | torch.Tensor.view                                                                 | head.layers.1.attn                                | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0014674             | torch.Size([16, 512, 512])       |
| 683     | torch.Tensor.view                                                                 | head.layers.1.attn                                | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0014674             | torch.Size([2, 8, 512, 512])     |
| 684     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.1.attn.attn_weights_mean              | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0014674             | torch.Size([2, 8, 512, 512])     |
| 684     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.1.attn.attn_weights_mean              | output              | torch.float32 |         | 0.0000000         | 0.4864796        | 0.0019531      | 0.0002346             | torch.Size([2, 512, 512])        |
| 685     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | input_0             | torch.float32 |         | -1.4716967        | 1.2127035        | 0.0119910      | 0.1097871             | torch.Size([512, 2, 512])        |
| 685     | torch.Tensor.transpose                                                            | head.layers.1.attn                                | output              | torch.float32 |         | -1.4716967        | 1.2127035        | 0.0119910      | 0.1097871             | torch.Size([2, 512, 512])        |
| 686     | torch.nn.modules.dropout.Dropout                                                  | head.layers.1.dropout                             | input               | torch.float32 |         | -1.4716967        | 1.2127035        | 0.0119910      | 0.1097871             | torch.Size([2, 512, 512])        |
| 686     | torch.nn.modules.dropout.Dropout                                                  | head.layers.1.dropout                             | output              | torch.float32 |         | -1.4716967        | 1.2127035        | 0.0119910      | 0.1097871             | torch.Size([2, 512, 512])        |
| 687     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.1.add                                 | input_0             | torch.float32 |         | -6.8353300        | 7.9945936        | 0.0191490      | 0.7374097             | torch.Size([2, 512, 512])        |
| 687     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.1.add                                 | input_1             | torch.float32 |         | -1.4716967        | 1.2127035        | 0.0119910      | 0.1097871             | torch.Size([2, 512, 512])        |
| 687     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.1.add                                 | output              | torch.float32 |         | -6.7379370        | 7.9337821        | 0.0311400      | 0.8964915             | torch.Size([2, 512, 512])        |
| 688     | torch.nn.modules.linear.Linear                                                    | head.fc_after(1)                                  | input               | torch.float32 |         | -6.7379370        | 7.9337821        | 0.0311400      | 0.8964915             | torch.Size([2, 512, 512])        |
| 688     | torch.nn.modules.linear.Linear                                                    | head.fc_after(1)                                  | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 688     | torch.nn.modules.linear.Linear                                                    | head.fc_after(1)                                  | output              | torch.float32 |         | -38.7241287       | 29.3965588       | 0.0476169      | 12.6816130            | torch.Size([2, 512, 256])        |
| 689     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.2.input_mean.mean                     | input_0             | torch.float32 |         | -38.7241287       | 29.3965588       | 0.0476169      | 12.6816130            | torch.Size([2, 512, 256])        |
| 689     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.2.input_mean.mean                     | output              | torch.float32 |         | -0.0505157        | 0.1147464        | 0.0476169      | 0.0018962             | torch.Size([2, 512, 1])          |
| 690     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.2.sub                                 | input_0             | torch.float32 |         | -38.7241287       | 29.3965588       | 0.0476169      | 12.6816130            | torch.Size([2, 512, 256])        |
| 690     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.2.sub                                 | input_1             | torch.float32 |         | -0.0505157        | 0.1147464        | 0.0476169      | 0.0018962             | torch.Size([2, 512, 1])          |
| 690     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.2.sub                                 | output              | torch.float32 |         | -38.7809601       | 29.3397274       | 0.0000000      | 12.6797190            | torch.Size([2, 512, 256])        |
| 691     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.2.mul                                 | input_0             | torch.float32 |         | -38.7809601       | 29.3397274       | 0.0000000      | 12.6797190            | torch.Size([2, 512, 256])        |
| 691     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.2.mul                                 | input_1             | torch.float32 |         | -38.7809601       | 29.3397274       | 0.0000000      | 12.6797190            | torch.Size([2, 512, 256])        |
| 691     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.2.mul                                 | output              | torch.float32 |         | 0.0000000         | 1503.9628906     | 12.6796703     | 5096.3715820          | torch.Size([2, 512, 256])        |
| 692     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.2.var_mean.mean                       | input_0             | torch.float32 |         | 0.0000000         | 1503.9628906     | 12.6796703     | 5096.3715820          | torch.Size([2, 512, 256])        |
| 692     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.2.var_mean.mean                       | output              | torch.float32 |         | 7.0969419         | 24.7593689       | 12.6796703     | 21.1510277            | torch.Size([2, 512, 1])          |
| 693     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.2.rsqrt                               | input               | torch.float32 |         | 7.0969419         | 24.7593689       | 12.6796703     | 21.1510277            | torch.Size([2, 512, 1])          |
| 693     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.2.rsqrt                               | output              | torch.float32 |         | 0.2009695         | 0.3753739        | 0.2955208      | 0.0030172             | torch.Size([2, 512, 1])          |
| 694     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.2.out_mul                             | input_0             | torch.float32 |         | -38.7809601       | 29.3397274       | 0.0000000      | 12.6797190            | torch.Size([2, 512, 256])        |
| 694     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.2.out_mul                             | input_1             | torch.float32 |         | 0.2009695         | 0.3753739        | 0.2955208      | 0.0030172             | torch.Size([2, 512, 1])          |
| 694     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.2.out_mul                             | output              | torch.float32 |         | -7.9786782        | 6.1649113        | -0.0000000     | 1.0000030             | torch.Size([2, 512, 256])        |
| 695     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.2.weight_quant                        | input               | torch.float32 |         | 0.7212925         | 1.0280097        | 0.8725660      | 0.0030677             | torch.Size([256])                |
| 695     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.2.weight_quant                        | output              | torch.float32 |         | 0.7212925         | 1.0280097        | 0.8725660      | 0.0030677             | torch.Size([256])                |
| 696     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.2.weight_mul                          | input_0             | torch.float32 |         | -7.9786782        | 6.1649113        | -0.0000000     | 1.0000030             | torch.Size([2, 512, 256])        |
| 696     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.2.weight_mul                          | input_1             | torch.float32 |         | 0.7212925         | 1.0280097        | 0.8725660      | 0.0030677             | torch.Size([256])                |
| 696     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.2.weight_mul                          | output              | torch.float32 |         | -7.2269816        | 5.3082790        | -0.0017775     | 0.8024370             | torch.Size([2, 512, 256])        |
| 697     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.2.bias_quant                          | input               | torch.float32 |         | -0.1147615        | 0.1351990        | 0.0041992      | 0.0017473             | torch.Size([256])                |
| 697     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.2.bias_quant                          | output              | torch.float32 |         | -0.1147615        | 0.1351990        | 0.0041992      | 0.0017473             | torch.Size([256])                |
| 698     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.2.bias_add                            | input_0             | torch.float32 |         | -7.2269816        | 5.3082790        | -0.0017775     | 0.8024370             | torch.Size([2, 512, 256])        |
| 698     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.2.bias_add                            | input_1             | torch.float32 |         | -0.1147615        | 0.1351990        | 0.0041992      | 0.0017473             | torch.Size([256])                |
| 698     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.2.bias_add                            | output              | torch.float32 |         | -7.2067256        | 5.2103510        | 0.0024216      | 0.7922981             | torch.Size([2, 512, 256])        |
| 699     | torch.nn.modules.linear.Linear                                                    | head.layers.3.kps_generator.offset                | input               | torch.float32 |         | -7.2067256        | 5.2103510        | 0.0024216      | 0.7922981             | torch.Size([2, 512, 256])        |
| 699     | torch.nn.modules.linear.Linear                                                    | head.layers.3.kps_generator.offset                | weight              | torch.float32 |         | -0.3113222        | 0.3088498        | -0.0000128     | 0.0058743             | torch.Size([24, 256])            |
| 699     | torch.nn.modules.linear.Linear                                                    | head.layers.3.kps_generator.offset                | bias                | torch.float32 |         | -0.1541595        | 0.0698048        | -0.0043113     | 0.0048043             | torch.Size([24])                 |
| 699     | torch.nn.modules.linear.Linear                                                    | head.layers.3.kps_generator.offset                | output              | torch.float32 |         | -17.6597633       | 10.6283236       | -0.9466812     | 13.9456062            | torch.Size([2, 512, 24])         |
| 700     | torch.Tensor.view                                                                 | head.layers.3.kps_generator                       | input_0             | torch.float32 |         | -17.6597633       | 10.6283236       | -0.9466812     | 13.9456062            | torch.Size([2, 512, 24])         |
| 700     | torch.Tensor.view                                                                 | head.layers.3.kps_generator                       | output              | torch.float32 |         | -17.6597633       | 10.6283236       | -0.9466812     | 13.9456062            | torch.Size([2, 512, 8, 3])       |
| 701     | torch.Tensor.__getitem__                                                          | head.layers.3.kps_generator                       | input_0             | torch.float32 |         | -52.9582825       | 52.8438606       | 0.4784662      | 77.4394913            | torch.Size([2, 512, 11])         |
| 701     | torch.Tensor.__getitem__                                                          | head.layers.3.kps_generator                       | output              | torch.float32 |         | -52.9582825       | 52.8438606       | 1.0650947      | 283.1617432           | torch.Size([2, 512, 1, 3])       |
| 702     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.kps_generator.keypoints_add         | input_0             | torch.float32 |         | -17.6597633       | 10.6283236       | -0.9466812     | 13.9456062            | torch.Size([2, 512, 8, 3])       |
| 702     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.kps_generator.keypoints_add         | input_1             | torch.float32 |         | -52.9582825       | 52.8438606       | 1.0650947      | 283.1617432           | torch.Size([2, 512, 1, 3])       |
| 702     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.kps_generator.keypoints_add         | output              | torch.float32 |         | -59.0253258       | 56.3024254       | 0.1184136      | 289.4903564           | torch.Size([2, 512, 8, 3])       |
| 703     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.weight_add                          | input_0             | torch.float32 |         | -7.2067256        | 5.2103510        | 0.0024216      | 0.7922981             | torch.Size([2, 512, 256])        |
| 703     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.weight_add                          | input_1             | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0540784      | 0.8482460             | torch.Size([2, 512, 256])        |
| 703     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.weight_add                          | output              | torch.float32 |         | -7.5877662        | 8.3291559        | 0.0565001      | 1.5323919             | torch.Size([2, 512, 256])        |
| 704     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 704     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 705     | torch.Tensor.reshape                                                              | head.layers.3                                     | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 705     | torch.Tensor.reshape                                                              | head.layers.3                                     | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 706     | torch.nn.modules.linear.Linear                                                    | head.layers.3.camera_encoder.0                    | input               | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 706     | torch.nn.modules.linear.Linear                                                    | head.layers.3.camera_encoder.0                    | weight              | torch.float32 |         | -0.6545363        | 0.5989806        | -0.0019711     | 0.0136002             | torch.Size([256, 12])            |
| 706     | torch.nn.modules.linear.Linear                                                    | head.layers.3.camera_encoder.0                    | bias                | torch.float32 |         | -0.3380467        | 0.3536568        | 0.0151805      | 0.0322619             | torch.Size([256])                |
| 706     | torch.nn.modules.linear.Linear                                                    | head.layers.3.camera_encoder.0                    | output              | torch.float32 |         | -1.2576946        | 1.5937320        | 0.0170405      | 0.2737365             | torch.Size([2, 6, 256])          |
| 707     | torch.nn.modules.activation.ReLU                                                  | head.layers.3.camera_encoder.1                    | input               | torch.float32 |         | 0.0000000         | 1.5937320        | 0.2265511      | 0.1165193             | torch.Size([2, 6, 256])          |
| 707     | torch.nn.modules.activation.ReLU                                                  | head.layers.3.camera_encoder.1                    | output              | torch.float32 |         | 0.0000000         | 1.5937320        | 0.2265511      | 0.1165193             | torch.Size([2, 6, 256])          |
| 708     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.2.input_mean.mean    | input_0             | torch.float32 |         | 0.0000000         | 1.5937320        | 0.2265511      | 0.1165193             | torch.Size([2, 6, 256])          |
| 708     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.2.input_mean.mean    | output              | torch.float32 |         | 0.1656024         | 0.2490330        | 0.2265511      | 0.0008505             | torch.Size([2, 6, 1])            |
| 709     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.2.sub                | input_0             | torch.float32 |         | 0.0000000         | 1.5937320        | 0.2265511      | 0.1165193             | torch.Size([2, 6, 256])          |
| 709     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.2.sub                | input_1             | torch.float32 |         | 0.1656024         | 0.2490330        | 0.2265511      | 0.0008505             | torch.Size([2, 6, 1])            |
| 709     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.2.sub                | output              | torch.float32 |         | -0.2490330        | 1.3565958        | 0.0000000      | 0.1157394             | torch.Size([2, 6, 256])          |
| 710     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.mul                | input_0             | torch.float32 |         | -0.2490330        | 1.3565958        | 0.0000000      | 0.1157394             | torch.Size([2, 6, 256])          |
| 710     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.mul                | input_1             | torch.float32 |         | -0.2490330        | 1.3565958        | 0.0000000      | 0.1157394             | torch.Size([2, 6, 256])          |
| 710     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.mul                | output              | torch.float32 |         | 0.0000004         | 1.8403521        | 0.1157018      | 0.0443429             | torch.Size([2, 6, 256])          |
| 711     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.2.var_mean.mean      | input_0             | torch.float32 |         | 0.0000004         | 1.8403521        | 0.1157018      | 0.0443429             | torch.Size([2, 6, 256])          |
| 711     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.2.var_mean.mean      | output              | torch.float32 |         | 0.0607374         | 0.1419687        | 0.1157018      | 0.0007492             | torch.Size([2, 6, 1])            |
| 712     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.3.camera_encoder.2.rsqrt              | input               | torch.float32 |         | 0.0607374         | 0.1419687        | 0.1157018      | 0.0007492             | torch.Size([2, 6, 1])            |
| 712     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.3.camera_encoder.2.rsqrt              | output              | torch.float32 |         | 2.6539233         | 4.0572920        | 3.0207267      | 0.2346750             | torch.Size([2, 6, 1])            |
| 713     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.out_mul            | input_0             | torch.float32 |         | -0.2490330        | 1.3565958        | 0.0000000      | 0.1157394             | torch.Size([2, 6, 256])          |
| 713     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.out_mul            | input_1             | torch.float32 |         | 2.6539233         | 4.0572920        | 3.0207267      | 0.2346750             | torch.Size([2, 6, 1])            |
| 713     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.out_mul            | output              | torch.float32 |         | -0.6952446        | 3.8896844        | -0.0000000     | 1.0002321             | torch.Size([2, 6, 256])          |
| 714     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.camera_encoder.2.weight_quant       | input               | torch.float32 |         | 0.8028511         | 1.1667448        | 0.9937689      | 0.0040703             | torch.Size([256])                |
| 714     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.camera_encoder.2.weight_quant       | output              | torch.float32 |         | 0.8028511         | 1.1667448        | 0.9937689      | 0.0040703             | torch.Size([256])                |
| 715     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.weight_mul         | input_0             | torch.float32 |         | -0.6952446        | 3.8896844        | -0.0000000     | 1.0002321             | torch.Size([2, 6, 256])          |
| 715     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.weight_mul         | input_1             | torch.float32 |         | 0.8028511         | 1.1667448        | 0.9937689      | 0.0040703             | torch.Size([256])                |
| 715     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.2.weight_mul         | output              | torch.float32 |         | -0.7840746        | 4.0872602        | -0.0056729     | 1.0214678             | torch.Size([2, 6, 256])          |
| 716     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.camera_encoder.2.bias_quant         | input               | torch.float32 |         | -0.1349080        | 0.1125814        | -0.0114335     | 0.0026644             | torch.Size([256])                |
| 716     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.camera_encoder.2.bias_quant         | output              | torch.float32 |         | -0.1349080        | 0.1125814        | -0.0114335     | 0.0026644             | torch.Size([256])                |
| 717     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.2.bias_add           | input_0             | torch.float32 |         | -0.7840746        | 4.0872602        | -0.0056729     | 1.0214678             | torch.Size([2, 6, 256])          |
| 717     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.2.bias_add           | input_1             | torch.float32 |         | -0.1349080        | 0.1125814        | -0.0114335     | 0.0026644             | torch.Size([256])                |
| 717     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.2.bias_add           | output              | torch.float32 |         | -0.9150975        | 4.1722488        | -0.0171064     | 1.0790805             | torch.Size([2, 6, 256])          |
| 718     | torch.nn.modules.linear.Linear                                                    | head.layers.3.camera_encoder.3                    | input               | torch.float32 |         | -0.9150975        | 4.1722488        | -0.0171064     | 1.0790805             | torch.Size([2, 6, 256])          |
| 718     | torch.nn.modules.linear.Linear                                                    | head.layers.3.camera_encoder.3                    | weight              | torch.float32 |         | -0.4090023        | 0.4386477        | 0.0001596      | 0.0048304             | torch.Size([256, 256])           |
| 718     | torch.nn.modules.linear.Linear                                                    | head.layers.3.camera_encoder.3                    | bias                | torch.float32 |         | -0.0807881        | 0.3063670        | -0.0007200     | 0.0023478             | torch.Size([256])                |
| 718     | torch.nn.modules.linear.Linear                                                    | head.layers.3.camera_encoder.3                    | output              | torch.float32 |         | -7.4711604        | 59.1860046       | -0.0801236     | 39.0631371            | torch.Size([2, 6, 256])          |
| 719     | torch.nn.modules.activation.ReLU                                                  | head.layers.3.camera_encoder.4                    | input               | torch.float32 |         | 0.0000000         | 59.1860046       | 1.3384842      | 34.2808266            | torch.Size([2, 6, 256])          |
| 719     | torch.nn.modules.activation.ReLU                                                  | head.layers.3.camera_encoder.4                    | output              | torch.float32 |         | 0.0000000         | 59.1860046       | 1.3384842      | 34.2808266            | torch.Size([2, 6, 256])          |
| 720     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.5.input_mean.mean    | input_0             | torch.float32 |         | 0.0000000         | 59.1860046       | 1.3384842      | 34.2808266            | torch.Size([2, 6, 256])          |
| 720     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.5.input_mean.mean    | output              | torch.float32 |         | 1.3050106         | 1.3903527        | 1.3384842      | 0.0008021             | torch.Size([2, 6, 1])            |
| 721     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.5.sub                | input_0             | torch.float32 |         | 0.0000000         | 59.1860046       | 1.3384842      | 34.2808266            | torch.Size([2, 6, 256])          |
| 721     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.5.sub                | input_1             | torch.float32 |         | 1.3050106         | 1.3903527        | 1.3384842      | 0.0008021             | torch.Size([2, 6, 1])            |
| 721     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.3.camera_encoder.5.sub                | output              | torch.float32 |         | -1.3903527        | 57.7956505       | 0.0000000      | 34.2800865            | torch.Size([2, 6, 256])          |
| 722     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.mul                | input_0             | torch.float32 |         | -1.3903527        | 57.7956505       | 0.0000000      | 34.2800865            | torch.Size([2, 6, 256])          |
| 722     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.mul                | input_1             | torch.float32 |         | -1.3903527        | 57.7956505       | 0.0000000      | 34.2800865            | torch.Size([2, 6, 256])          |
| 722     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.mul                | output              | torch.float32 |         | 0.0000067         | 3340.3371582     | 34.2689285     | 76678.1406250         | torch.Size([2, 6, 256])          |
| 723     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.5.var_mean.mean      | input_0             | torch.float32 |         | 0.0000067         | 3340.3371582     | 34.2689285     | 76678.1406250         | torch.Size([2, 6, 256])          |
| 723     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.3.camera_encoder.5.var_mean.mean      | output              | torch.float32 |         | 33.2077599        | 35.6957207       | 34.2689323     | 0.6936293             | torch.Size([2, 6, 1])            |
| 724     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.3.camera_encoder.5.rsqrt              | input               | torch.float32 |         | 33.2077599        | 35.6957207       | 34.2689323     | 0.6936293             | torch.Size([2, 6, 1])            |
| 724     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.3.camera_encoder.5.rsqrt              | output              | torch.float32 |         | 0.1673755         | 0.1735322        | 0.1708589      | 0.0000043             | torch.Size([2, 6, 1])            |
| 725     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.out_mul            | input_0             | torch.float32 |         | -1.3903527        | 57.7956505       | 0.0000000      | 34.2800865            | torch.Size([2, 6, 256])          |
| 725     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.out_mul            | input_1             | torch.float32 |         | 0.1673755         | 0.1735322        | 0.1708589      | 0.0000043             | torch.Size([2, 6, 1])            |
| 725     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.out_mul            | output              | torch.float32 |         | -0.2336953        | 9.8104343        | -0.0000000     | 1.0003253             | torch.Size([2, 6, 256])          |
| 726     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.camera_encoder.5.weight_quant       | input               | torch.float32 |         | 0.5028567         | 1.4622601        | 0.8814595      | 0.0321079             | torch.Size([256])                |
| 726     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.camera_encoder.5.weight_quant       | output              | torch.float32 |         | 0.5028567         | 1.4622601        | 0.8814595      | 0.0321079             | torch.Size([256])                |
| 727     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.weight_mul         | input_0             | torch.float32 |         | -0.2336953        | 9.8104343        | -0.0000000     | 1.0003253             | torch.Size([2, 6, 256])          |
| 727     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.weight_mul         | input_1             | torch.float32 |         | 0.5028567         | 1.4622601        | 0.8814595      | 0.0321079             | torch.Size([256])                |
| 727     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.camera_encoder.5.weight_mul         | output              | torch.float32 |         | -0.3417233        | 7.4744577        | -0.0258637     | 0.5567260             | torch.Size([2, 6, 256])          |
| 728     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.camera_encoder.5.bias_quant         | input               | torch.float32 |         | -0.5241177        | 0.5032777        | 0.0442741      | 0.0375308             | torch.Size([256])                |
| 728     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.camera_encoder.5.bias_quant         | output              | torch.float32 |         | -0.5241177        | 0.5032777        | 0.0442741      | 0.0375308             | torch.Size([256])                |
| 729     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.5.bias_add           | input_0             | torch.float32 |         | -0.3417233        | 7.4744577        | -0.0258637     | 0.5567260             | torch.Size([2, 6, 256])          |
| 729     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.5.bias_add           | input_1             | torch.float32 |         | -0.5241177        | 0.5032777        | 0.0442741      | 0.0375308             | torch.Size([256])                |
| 729     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.camera_encoder.5.bias_add           | output              | torch.float32 |         | -0.8568722        | 7.4137011        | 0.0184104      | 0.5428293             | torch.Size([2, 6, 256])          |
| 730     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | input_0             | torch.float32 |         | -7.5877662        | 8.3291559        | 0.0565001      | 1.5323919             | torch.Size([2, 512, 256])        |
| 730     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | output              | torch.float32 |         | -7.5877662        | 8.3291559        | 0.0565001      | 1.5323919             | torch.Size([2, 512, 1, 256])     |
| 731     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | input_0             | torch.float32 |         | -0.8568722        | 7.4137011        | 0.0184104      | 0.5428293             | torch.Size([2, 6, 256])          |
| 731     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | output              | torch.float32 |         | -0.8568722        | 7.4137011        | 0.0184104      | 0.5428293             | torch.Size([2, 1, 6, 256])       |
| 732     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.cam_add                             | input_0             | torch.float32 |         | -7.5877662        | 8.3291559        | 0.0565001      | 1.5323919             | torch.Size([2, 512, 1, 256])     |
| 732     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.cam_add                             | input_1             | torch.float32 |         | -0.8568722        | 7.4137011        | 0.0184104      | 0.5428293             | torch.Size([2, 1, 6, 256])       |
| 732     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.3.cam_add                             | output              | torch.float32 |         | -5.3444147        | 7.6638184        | 0.0749104      | 1.1015700             | torch.Size([2, 512, 6, 256])     |
| 733     | torch.nn.modules.linear.Linear                                                    | head.layers.3.weights_fc                          | input               | torch.float32 |         | -5.3444147        | 7.6638184        | 0.0749104      | 1.1015700             | torch.Size([2, 512, 6, 256])     |
| 733     | torch.nn.modules.linear.Linear                                                    | head.layers.3.weights_fc                          | weight              | torch.float32 |         | -0.4302702        | 0.3039190        | -0.0007312     | 0.0026000             | torch.Size([64, 256])            |
| 733     | torch.nn.modules.linear.Linear                                                    | head.layers.3.weights_fc                          | bias                | torch.float32 |         | -0.0972505        | 0.0706504        | 0.0092854      | 0.0013785             | torch.Size([64])                 |
| 733     | torch.nn.modules.linear.Linear                                                    | head.layers.3.weights_fc                          | output              | torch.float32 |         | -6.8998957        | 7.0770836        | 0.3934667      | 5.1567459             | torch.Size([2, 512, 6, 64])      |
| 734     | torch.Tensor.reshape                                                              | head.layers.3                                     | input_0             | torch.float32 |         | -6.8998957        | 7.0770836        | 0.3934667      | 5.1567459             | torch.Size([2, 512, 6, 64])      |
| 734     | torch.Tensor.reshape                                                              | head.layers.3                                     | output              | torch.float32 |         | -6.8998957        | 7.0770836        | 0.3934667      | 5.1567459             | torch.Size([2, 512, 48, 8])      |
| 735     | torch.Tensor.max                                                                  | head.layers.3.weight_softmax                      | input               | torch.float32 |         | -6.8998957        | 7.0770836        | 0.3934667      | 5.1567459             | torch.Size([2, 512, 48, 8])      |
| 735     | torch.Tensor.max                                                                  | head.layers.3.weight_softmax                      | output_0            | torch.float32 |         | 1.4814475         | 7.0770836        | 3.6450174      | 1.0064299             | torch.Size([2, 512, 1, 8])       |
| 735     | torch.Tensor.max                                                                  | head.layers.3.weight_softmax                      | output_1            | torch.int64   |         | 1.0000000         | 46.0000000       | 26.8912354     | 132.4375610           | torch.Size([2, 512, 1, 8])       |
| 736     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.3.weight_softmax.sub                  | input_0             | torch.float32 |         | -6.8998957        | 7.0770836        | 0.3934667      | 5.1567459             | torch.Size([2, 512, 48, 8])      |
| 736     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.3.weight_softmax.sub                  | input_1             | torch.float32 |         | 1.4814475         | 7.0770836        | 3.6450174      | 1.0064299             | torch.Size([2, 512, 1, 8])       |
| 736     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.3.weight_softmax.sub                  | output              | torch.float32 |         | -11.0972805       | 0.0000000        | -3.2515507     | 5.8083048             | torch.Size([2, 512, 48, 8])      |
| 737     | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.3.weight_softmax.exp                  | input               | torch.float32 |         | -11.0972805       | 0.0000000        | -3.2515507     | 5.8083048             | torch.Size([2, 512, 48, 8])      |
| 737     | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.3.weight_softmax.exp                  | output              | torch.float32 |         | 0.0000152         | 1.0000000        | 0.2294937      | 0.1045396             | torch.Size([2, 512, 48, 8])      |
| 738     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.3.weight_softmax.sum                  | input               | torch.float32 |         | 0.0000152         | 1.0000000        | 0.2294937      | 0.1045396             | torch.Size([2, 512, 48, 8])      |
| 738     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.3.weight_softmax.sum                  | output              | torch.float32 |         | 5.5269437         | 28.9658928       | 11.0156975     | 13.6788511            | torch.Size([2, 512, 1, 8])       |
| 739     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.3.weight_softmax.reciprocal           | input               | torch.float32 |         | 5.5269437         | 28.9658928       | 11.0156975     | 13.6788511            | torch.Size([2, 512, 1, 8])       |
| 739     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.3.weight_softmax.reciprocal           | output              | torch.float32 |         | 0.0345234         | 0.1809318        | 0.1003734      | 0.0009360             | torch.Size([2, 512, 1, 8])       |
| 740     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.weight_softmax.mul                  | input_0             | torch.float32 |         | 0.0000152         | 1.0000000        | 0.2294937      | 0.1045396             | torch.Size([2, 512, 48, 8])      |
| 740     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.weight_softmax.mul                  | input_1             | torch.float32 |         | 0.0345234         | 0.1809318        | 0.1003734      | 0.0009360             | torch.Size([2, 512, 1, 8])       |
| 740     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.weight_softmax.mul                  | output              | torch.float32 |         | 0.0000017         | 0.1809318        | 0.0208333      | 0.0010363             | torch.Size([2, 512, 48, 8])      |
| 741     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | input_0             | torch.float32 |         | -59.0253258       | 56.3024254       | 0.1184136      | 289.4903564           | torch.Size([2, 512, 8, 3])       |
| 741     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | output              | torch.float32 |         | -59.0253258       | 53.2126541       | 0.1927917      | 322.0498962           | torch.Size([2, 512, 8, 1])       |
| 742     | torch.ones_like                                                                   | head.layers.3                                     | input               | torch.float32 |         | -59.0253258       | 53.2126541       | 0.1927917      | 322.0498962           | torch.Size([2, 512, 8, 1])       |
| 742     | torch.ones_like                                                                   | head.layers.3                                     | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 743     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.point_quant_stub                    | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 743     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.3.point_quant_stub                    | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 744     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.3.point_cat                           | input_0             | torch.float32 |         | -59.0253258       | 56.3024254       | 0.1184136      | 289.4903564           | torch.Size([2, 512, 8, 3])       |
| 744     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.3.point_cat                           | input_1             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 744     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.3.point_cat                           | output              | torch.float32 |         | -59.0253258       | 56.3024254       | 0.3388101      | 217.2613068           | torch.Size([2, 512, 8, 4])       |
| 745     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 745     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 746     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | input_0             | torch.float32 |         | -59.0253258       | 56.3024254       | 0.3388101      | 217.2613068           | torch.Size([2, 512, 8, 4])       |
| 746     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | output              | torch.float32 |         | -59.0253258       | 56.3024254       | 0.3388101      | 217.2613068           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 747     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.point_matmul                        | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 747     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.point_matmul                        | input_1             | torch.float32 |         | -59.0253258       | 56.3024254       | 0.3388101      | 217.2613068           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 747     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.point_matmul                        | output              | torch.float32 |         | -93.4312820       | 84.2303925       | 0.2115338      | 97.5851746            | torch.Size([2, 6, 512, 8, 4, 4]) |
| 748     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.3.point_sum                           | input               | torch.float32 |         | -93.4312820       | 84.2303925       | 0.2115338      | 97.5851746            | torch.Size([2, 6, 512, 8, 4, 4]) |
| 748     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.3.point_sum                           | output              | torch.float32 |         | -94.7208710       | 90.7924423       | 0.8461353      | 385.2588806           | torch.Size([2, 6, 512, 8, 4])    |
| 749     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | input_0             | torch.float32 |         | -94.7208710       | 90.7924423       | 0.8461353      | 385.2588806           | torch.Size([2, 6, 512, 8, 4])    |
| 749     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | output              | torch.float32 |         | -63.8730049       | 61.8238716       | -0.5946904     | 428.1226807           | torch.Size([2, 6, 512, 8, 1])    |
| 750     | torch.clamp                                                                       | head.layers.3                                     | input               | torch.float32 |         | -63.8730049       | 61.8238716       | -0.5946904     | 428.1226807           | torch.Size([2, 6, 512, 8, 1])    |
| 750     | torch.clamp                                                                       | head.layers.3                                     | output              | torch.float32 |         | 0.0000100         | 61.8238716       | 7.4454718      | 149.7758789           | torch.Size([2, 6, 512, 8, 1])    |
| 751     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.3.reciprocal_op                       | input               | torch.float32 |         | 0.0000100         | 61.8238716       | 7.4454718      | 149.7758789           | torch.Size([2, 6, 512, 8, 1])    |
| 751     | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.3.reciprocal_op                       | output              | torch.float32 |         | 0.0161750         | 100000.0000000   | 53315.6679688  | 2488320512.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 752     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | input_0             | torch.float32 |         | -94.7208710       | 90.7924423       | 0.8461353      | 385.2588806           | torch.Size([2, 6, 512, 8, 4])    |
| 752     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | output              | torch.float32 |         | -94.7208710       | 90.7924423       | 1.4896158      | 554.9987183           | torch.Size([2, 6, 512, 8, 2])    |
| 753     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.point_mul                           | input_0             | torch.float32 |         | -94.7208710       | 90.7924423       | 1.4896158      | 554.9987183           | torch.Size([2, 6, 512, 8, 2])    |
| 753     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.point_mul                           | input_1             | torch.float32 |         | 0.0161750         | 100000.0000000   | 53315.6679688  | 2488320512.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 753     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.point_mul                           | output              | torch.float32 |         | -9472087.0000000  | 9055228.0000000  | 236836.0625000 | 2857729523712.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 754     | torch.Tensor.flatten                                                              | head.layers.3                                     | input               | torch.float32 |         | -9472087.0000000  | 9055228.0000000  | 236836.0625000 | 2857729523712.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 754     | torch.Tensor.flatten                                                              | head.layers.3                                     | output              | torch.float32 |         | -9472087.0000000  | 9055228.0000000  | 236836.0625000 | 2857729523712.0000000 | torch.Size([12, 512, 8, 2])      |
| 755     | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.3                                     | input_0             | torch.float32 |         | -44.8620338       | 31.9191360       | 0.1436918      | 20.2713203            | torch.Size([12, 256, 16, 44])    |
| 755     | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.3                                     | input_1             | torch.float32 |         | -9472087.0000000  | 9055228.0000000  | 236836.0625000 | 2857729523712.0000000 | torch.Size([12, 512, 8, 2])      |
| 755     | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.3                                     | output              | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([12, 256, 512, 8])    |
| 756     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.3.feat_cat                            | input               | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([12, 256, 512, 8])    |
| 756     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.3.feat_cat                            | output              | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([12, 256, 512, 8])    |
| 757     | torch.Tensor.view                                                                 | head.layers.3                                     | input_0             | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([12, 256, 512, 8])    |
| 757     | torch.Tensor.view                                                                 | head.layers.3                                     | output              | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 6, 256, 512, 8])  |
| 758     | torch.Tensor.permute                                                              | head.layers.3                                     | input_0             | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 6, 256, 512, 8])  |
| 758     | torch.Tensor.permute                                                              | head.layers.3                                     | output              | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 512, 6, 8, 256])  |
| 759     | torch.Tensor.contiguous                                                           | head.layers.3                                     | input               | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 512, 6, 8, 256])  |
| 759     | torch.Tensor.contiguous                                                           | head.layers.3                                     | output              | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 512, 6, 8, 256])  |
| 760     | torch.Tensor.view                                                                 | head.layers.3                                     | input_0             | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 512, 6, 8, 256])  |
| 760     | torch.Tensor.view                                                                 | head.layers.3                                     | output              | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 512, 48, 256])    |
| 761     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | input_0             | torch.float32 |         | 0.0000017         | 0.1809318        | 0.0208333      | 0.0010363             | torch.Size([2, 512, 48, 8])      |
| 761     | torch.Tensor.__getitem__                                                          | head.layers.3                                     | output              | torch.float32 |         | 0.0000017         | 0.1809318        | 0.0208333      | 0.0010363             | torch.Size([2, 512, 48, 8, 1])   |
| 762     | torch.Tensor.reshape                                                              | head.layers.3                                     | input_0             | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 512, 48, 256])    |
| 762     | torch.Tensor.reshape                                                              | head.layers.3                                     | output              | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 512, 48, 8, 32])  |
| 763     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.feat_mul                            | input_0             | torch.float32 |         | 0.0000017         | 0.1809318        | 0.0208333      | 0.0010363             | torch.Size([2, 512, 48, 8, 1])   |
| 763     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.feat_mul                            | input_1             | torch.float32 |         | -38.2435112       | 30.6906338       | 0.0331884      | 3.1064525             | torch.Size([2, 512, 48, 8, 32])  |
| 763     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.3.feat_mul                            | output              | torch.float32 |         | -3.1031749        | 3.7050192        | 0.0005487      | 0.0039263             | torch.Size([2, 512, 48, 8, 32])  |
| 764     | torch.Tensor.view                                                                 | head.layers.3                                     | input_0             | torch.float32 |         | -3.1031749        | 3.7050192        | 0.0005487      | 0.0039263             | torch.Size([2, 512, 48, 8, 32])  |
| 764     | torch.Tensor.view                                                                 | head.layers.3                                     | output              | torch.float32 |         | -3.1031749        | 3.7050192        | 0.0005487      | 0.0039263             | torch.Size([2, 512, 48, 256])    |
| 765     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.3.feat_sum                            | input               | torch.float32 |         | -3.1031749        | 3.7050192        | 0.0005487      | 0.0039263             | torch.Size([2, 512, 48, 256])    |
| 765     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.3.feat_sum                            | output              | torch.float32 |         | -5.5484610        | 5.1631966        | 0.0263362      | 0.3585278             | torch.Size([2, 512, 256])        |
| 766     | torch.nn.modules.linear.Linear                                                    | head.layers.3.output_proj                         | input               | torch.float32 |         | -5.5484610        | 5.1631966        | 0.0263362      | 0.3585278             | torch.Size([2, 512, 256])        |
| 766     | torch.nn.modules.linear.Linear                                                    | head.layers.3.output_proj                         | weight              | torch.float32 |         | -0.2840032        | 0.2785434        | -0.0005137     | 0.0057385             | torch.Size([256, 256])           |
| 766     | torch.nn.modules.linear.Linear                                                    | head.layers.3.output_proj                         | bias                | torch.float32 |         | -0.0963255        | 0.0840218        | -0.0024079     | 0.0011414             | torch.Size([256])                |
| 766     | torch.nn.modules.linear.Linear                                                    | head.layers.3.output_proj                         | output              | torch.float32 |         | -6.9596372        | 8.6584492        | 0.0279104      | 0.8257834             | torch.Size([2, 512, 256])        |
| 767     | torch.nn.modules.dropout.Dropout                                                  | head.layers.3.proj_drop                           | input               | torch.float32 |         | -6.9596372        | 8.6584492        | 0.0279104      | 0.8257834             | torch.Size([2, 512, 256])        |
| 767     | torch.nn.modules.dropout.Dropout                                                  | head.layers.3.proj_drop                           | output              | torch.float32 |         | -6.9596372        | 8.6584492        | 0.0279104      | 0.8257834             | torch.Size([2, 512, 256])        |
| 768     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.3.residual_op                         | input_0             | torch.float32 |         | -6.9596372        | 8.6584492        | 0.0279104      | 0.8257834             | torch.Size([2, 512, 256])        |
| 768     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.3.residual_op                         | input_1             | torch.float32 |         | -7.2067256        | 5.2103510        | 0.0024216      | 0.7922981             | torch.Size([2, 512, 256])        |
| 768     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.3.residual_op                         | output              | torch.float32 |         | -7.2067256        | 8.6584492        | 0.0151660      | 0.8092017             | torch.Size([2, 512, 512])        |
| 769     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.4.pre_norm.input_mean.mean            | input_0             | torch.float32 |         | -7.2067256        | 8.6584492        | 0.0151660      | 0.8092017             | torch.Size([2, 512, 512])        |
| 769     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.4.pre_norm.input_mean.mean            | output              | torch.float32 |         | -0.0731751        | 0.0935997        | 0.0151660      | 0.0005534             | torch.Size([2, 512, 1])          |
| 770     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.4.pre_norm.sub                        | input_0             | torch.float32 |         | -7.2067256        | 8.6584492        | 0.0151660      | 0.8092017             | torch.Size([2, 512, 512])        |
| 770     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.4.pre_norm.sub                        | input_1             | torch.float32 |         | -0.0731751        | 0.0935997        | 0.0151660      | 0.0005534             | torch.Size([2, 512, 1])          |
| 770     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.4.pre_norm.sub                        | output              | torch.float32 |         | -7.2483706        | 8.6924133        | 0.0000000      | 0.8086488             | torch.Size([2, 512, 512])        |
| 771     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.mul                        | input_0             | torch.float32 |         | -7.2483706        | 8.6924133        | 0.0000000      | 0.8086488             | torch.Size([2, 512, 512])        |
| 771     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.mul                        | input_1             | torch.float32 |         | -7.2483706        | 8.6924133        | 0.0000000      | 0.8086488             | torch.Size([2, 512, 512])        |
| 771     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.mul                        | output              | torch.float32 |         | 0.0000000         | 75.5580521       | 0.8086473      | 9.1734629             | torch.Size([2, 512, 512])        |
| 772     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.4.pre_norm.var_mean.mean              | input_0             | torch.float32 |         | 0.0000000         | 75.5580521       | 0.8086473      | 9.1734629             | torch.Size([2, 512, 512])        |
| 772     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.4.pre_norm.var_mean.mean              | output              | torch.float32 |         | 0.4230029         | 2.1828780        | 0.8086473      | 0.0821056             | torch.Size([2, 512, 1])          |
| 773     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.4.pre_norm.rsqrt                      | input               | torch.float32 |         | 0.4230029         | 2.1828780        | 0.8086473      | 0.0821056             | torch.Size([2, 512, 1])          |
| 773     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.4.pre_norm.rsqrt                      | output              | torch.float32 |         | 0.6768373         | 1.5375285        | 1.1718585      | 0.0525496             | torch.Size([2, 512, 1])          |
| 774     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.out_mul                    | input_0             | torch.float32 |         | -7.2483706        | 8.6924133        | 0.0000000      | 0.8086488             | torch.Size([2, 512, 512])        |
| 774     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.out_mul                    | input_1             | torch.float32 |         | 0.6768373         | 1.5375285        | 1.1718585      | 0.0525496             | torch.Size([2, 512, 1])          |
| 774     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.out_mul                    | output              | torch.float32 |         | -10.4010572       | 7.5036807        | 0.0000000      | 0.9999876             | torch.Size([2, 512, 512])        |
| 775     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.4.pre_norm.weight_quant               | input               | torch.float32 |         | 0.7392406         | 1.6099653        | 1.0357572      | 0.0463764             | torch.Size([512])                |
| 775     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.4.pre_norm.weight_quant               | output              | torch.float32 |         | 0.7392406         | 1.6099653        | 1.0357572      | 0.0463764             | torch.Size([512])                |
| 776     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.weight_mul                 | input_0             | torch.float32 |         | -10.4010572       | 7.5036807        | 0.0000000      | 0.9999876             | torch.Size([2, 512, 512])        |
| 776     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.weight_mul                 | input_1             | torch.float32 |         | 0.7392406         | 1.6099653        | 1.0357572      | 0.0463764             | torch.Size([512])                |
| 776     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.4.pre_norm.weight_mul                 | output              | torch.float32 |         | -7.7848358        | 6.0853353        | 0.0037334      | 0.8001521             | torch.Size([2, 512, 512])        |
| 777     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.4.pre_norm.bias_quant                 | input               | torch.float32 |         | -0.2265132        | 0.2360181        | -0.0012928     | 0.0045628             | torch.Size([512])                |
| 777     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.4.pre_norm.bias_quant                 | output              | torch.float32 |         | -0.2265132        | 0.2360181        | -0.0012928     | 0.0045628             | torch.Size([512])                |
| 778     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.4.pre_norm.bias_add                   | input_0             | torch.float32 |         | -7.7848358        | 6.0853353        | 0.0037334      | 0.8001521             | torch.Size([2, 512, 512])        |
| 778     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.4.pre_norm.bias_add                   | input_1             | torch.float32 |         | -0.2265132        | 0.2360181        | -0.0012928     | 0.0045628             | torch.Size([512])                |
| 778     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.4.pre_norm.bias_add                   | output              | torch.float32 |         | -7.5735970        | 5.9011645        | 0.0024406      | 0.7901992             | torch.Size([2, 512, 512])        |
| 779     | torch.nn.modules.linear.Linear                                                    | head.layers.4.layers.0.0                          | input               | torch.float32 |         | -7.5735970        | 5.9011645        | 0.0024406      | 0.7901992             | torch.Size([2, 512, 512])        |
| 779     | torch.nn.modules.linear.Linear                                                    | head.layers.4.layers.0.0                          | weight              | torch.float32 |         | -0.5703671        | 0.6200907        | -0.0004717     | 0.0053330             | torch.Size([1024, 512])          |
| 779     | torch.nn.modules.linear.Linear                                                    | head.layers.4.layers.0.0                          | bias                | torch.float32 |         | -0.2541566        | 0.0612331        | -0.0505678     | 0.0011895             | torch.Size([1024])               |
| 779     | torch.nn.modules.linear.Linear                                                    | head.layers.4.layers.0.0                          | output              | torch.float32 |         | -20.6342030       | 8.9743080        | -3.5635376     | 9.1564274             | torch.Size([2, 512, 1024])       |
| 780     | torch.nn.modules.activation.ReLU                                                  | head.layers.4.activate                            | input               | torch.float32 |         | 0.0000000         | 8.9743080        | 0.1848018      | 0.5191449             | torch.Size([2, 512, 1024])       |
| 780     | torch.nn.modules.activation.ReLU                                                  | head.layers.4.activate                            | output              | torch.float32 |         | 0.0000000         | 8.9743080        | 0.1848018      | 0.5191449             | torch.Size([2, 512, 1024])       |
| 781     | torch.nn.modules.dropout.Dropout                                                  | head.layers.4.layers.0.2                          | input               | torch.float32 |         | 0.0000000         | 8.9743080        | 0.1848018      | 0.5191449             | torch.Size([2, 512, 1024])       |
| 781     | torch.nn.modules.dropout.Dropout                                                  | head.layers.4.layers.0.2                          | output              | torch.float32 |         | 0.0000000         | 8.9743080        | 0.1848018      | 0.5191449             | torch.Size([2, 512, 1024])       |
| 782     | torch.nn.modules.linear.Linear                                                    | head.layers.4.layers.1                            | input               | torch.float32 |         | 0.0000000         | 8.9743080        | 0.1848018      | 0.5191449             | torch.Size([2, 512, 1024])       |
| 782     | torch.nn.modules.linear.Linear                                                    | head.layers.4.layers.1                            | weight              | torch.float32 |         | -0.5260783        | 0.6165652        | 0.0003563      | 0.0055565             | torch.Size([256, 1024])          |
| 782     | torch.nn.modules.linear.Linear                                                    | head.layers.4.layers.1                            | bias                | torch.float32 |         | -0.1731907        | 0.1124924        | 0.0009047      | 0.0009486             | torch.Size([256])                |
| 782     | torch.nn.modules.linear.Linear                                                    | head.layers.4.layers.1                            | output              | torch.float32 |         | -18.1953793       | 15.6566381       | 0.0967204      | 9.9088726             | torch.Size([2, 512, 256])        |
| 783     | torch.nn.modules.dropout.Dropout                                                  | head.layers.4.layers.2                            | input               | torch.float32 |         | -18.1953793       | 15.6566381       | 0.0967204      | 9.9088726             | torch.Size([2, 512, 256])        |
| 783     | torch.nn.modules.dropout.Dropout                                                  | head.layers.4.layers.2                            | output              | torch.float32 |         | -18.1953793       | 15.6566381       | 0.0967204      | 9.9088726             | torch.Size([2, 512, 256])        |
| 784     | torch.nn.modules.linear.Linear                                                    | head.layers.4.identity_fc                         | input               | torch.float32 |         | -7.5735970        | 5.9011645        | 0.0024406      | 0.7901992             | torch.Size([2, 512, 512])        |
| 784     | torch.nn.modules.linear.Linear                                                    | head.layers.4.identity_fc                         | weight              | torch.float32 |         | -0.4295534        | 0.5292953        | 0.0001577      | 0.0064885             | torch.Size([256, 512])           |
| 784     | torch.nn.modules.linear.Linear                                                    | head.layers.4.identity_fc                         | bias                | torch.float32 |         | -0.2421585        | 0.1580013        | 0.0014464      | 0.0019690             | torch.Size([256])                |
| 784     | torch.nn.modules.linear.Linear                                                    | head.layers.4.identity_fc                         | output              | torch.float32 |         | -34.7670441       | 19.5001717       | 0.2589243      | 13.9330959            | torch.Size([2, 512, 256])        |
| 785     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.4.short_add                           | input_0             | torch.float32 |         | -34.7670441       | 19.5001717       | 0.2589243      | 13.9330959            | torch.Size([2, 512, 256])        |
| 785     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.4.short_add                           | input_1             | torch.float32 |         | -18.1953793       | 15.6566381       | 0.0967204      | 9.9088726             | torch.Size([2, 512, 256])        |
| 785     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.4.short_add                           | output              | torch.float32 |         | -39.2707901       | 27.6006966       | 0.3556448      | 31.1293316            | torch.Size([2, 512, 256])        |
| 786     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.5.input_mean.mean                     | input_0             | torch.float32 |         | -39.2707901       | 27.6006966       | 0.3556448      | 31.1293316            | torch.Size([2, 512, 256])        |
| 786     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.5.input_mean.mean                     | output              | torch.float32 |         | 0.1517350         | 0.5929797        | 0.3556447      | 0.0047828             | torch.Size([2, 512, 1])          |
| 787     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.5.sub                                 | input_0             | torch.float32 |         | -39.2707901       | 27.6006966       | 0.3556448      | 31.1293316            | torch.Size([2, 512, 256])        |
| 787     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.5.sub                                 | input_1             | torch.float32 |         | 0.1517350         | 0.5929797        | 0.3556447      | 0.0047828             | torch.Size([2, 512, 1])          |
| 787     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.5.sub                                 | output              | torch.float32 |         | -39.6306343       | 27.3586941       | 0.0000000      | 31.1245537            | torch.Size([2, 512, 256])        |
| 788     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.5.mul                                 | input_0             | torch.float32 |         | -39.6306343       | 27.3586941       | 0.0000000      | 31.1245537            | torch.Size([2, 512, 256])        |
| 788     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.5.mul                                 | input_1             | torch.float32 |         | -39.6306343       | 27.3586941       | 0.0000000      | 31.1245537            | torch.Size([2, 512, 256])        |
| 788     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.5.mul                                 | output              | torch.float32 |         | 0.0000000         | 1570.5871582     | 31.1244354     | 5567.2680664          | torch.Size([2, 512, 256])        |
| 789     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.5.var_mean.mean                       | input_0             | torch.float32 |         | 0.0000000         | 1570.5871582     | 31.1244354     | 5567.2680664          | torch.Size([2, 512, 256])        |
| 789     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.5.var_mean.mean                       | output              | torch.float32 |         | 9.5873184         | 64.6063919       | 31.1244354     | 338.5275879           | torch.Size([2, 512, 1])          |
| 790     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.5.rsqrt                               | input               | torch.float32 |         | 9.5873184         | 64.6063919       | 31.1244354     | 338.5275879           | torch.Size([2, 512, 1])          |
| 790     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.5.rsqrt                               | output              | torch.float32 |         | 0.1244120         | 0.3229618        | 0.2000211      | 0.0023688             | torch.Size([2, 512, 1])          |
| 791     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.5.out_mul                             | input_0             | torch.float32 |         | -39.6306343       | 27.3586941       | 0.0000000      | 31.1245537            | torch.Size([2, 512, 256])        |
| 791     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.5.out_mul                             | input_1             | torch.float32 |         | 0.1244120         | 0.3229618        | 0.2000211      | 0.0023688             | torch.Size([2, 512, 1])          |
| 791     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.5.out_mul                             | output              | torch.float32 |         | -8.6833820        | 4.9855881        | 0.0000000      | 1.0000035             | torch.Size([2, 512, 256])        |
| 792     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.5.weight_quant                        | input               | torch.float32 |         | 0.5714198         | 1.0232420        | 0.8086407      | 0.0070534             | torch.Size([256])                |
| 792     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.5.weight_quant                        | output              | torch.float32 |         | 0.5714198         | 1.0232420        | 0.8086407      | 0.0070534             | torch.Size([256])                |
| 793     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.5.weight_mul                          | input_0             | torch.float32 |         | -8.6833820        | 4.9855881        | 0.0000000      | 1.0000035             | torch.Size([2, 512, 256])        |
| 793     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.5.weight_mul                          | input_1             | torch.float32 |         | 0.5714198         | 1.0232420        | 0.8086407      | 0.0070534             | torch.Size([256])                |
| 793     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.5.weight_mul                          | output              | torch.float32 |         | -5.9963465        | 3.4641972        | 0.0076130      | 0.6390720             | torch.Size([2, 512, 256])        |
| 794     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.5.bias_quant                          | input               | torch.float32 |         | -0.2882900        | 0.3227517        | -0.0009565     | 0.0035641             | torch.Size([256])                |
| 794     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.5.bias_quant                          | output              | torch.float32 |         | -0.2882900        | 0.3227517        | -0.0009565     | 0.0035641             | torch.Size([256])                |
| 795     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.5.bias_add                            | input_0             | torch.float32 |         | -5.9963465        | 3.4641972        | 0.0076130      | 0.6390720             | torch.Size([2, 512, 256])        |
| 795     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.5.bias_add                            | input_1             | torch.float32 |         | -0.2882900        | 0.3227517        | -0.0009565     | 0.0035641             | torch.Size([256])                |
| 795     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.5.bias_add                            | output              | torch.float32 |         | -5.6735950        | 3.3347695        | 0.0066565      | 0.6038858             | torch.Size([2, 512, 256])        |
| 796     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.add1                                | input_0             | torch.float32 |         | -5.6735950        | 3.3347695        | 0.0066565      | 0.6038858             | torch.Size([2, 512, 256])        |
| 796     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.add1                                | input_1             | torch.float32 |         | -1.2276719        | 7.9945936        | 0.0540784      | 0.8482460             | torch.Size([2, 512, 256])        |
| 796     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.add1                                | output              | torch.float32 |         | -4.9071460        | 7.6460686        | 0.0607350      | 1.1273332             | torch.Size([2, 512, 256])        |
| 797     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.0                            | input               | torch.float32 |         | -4.9071460        | 7.6460686        | 0.0607350      | 1.1273332             | torch.Size([2, 512, 256])        |
| 797     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.0                            | weight              | torch.float32 |         | -0.8907645        | 0.6765569        | -0.0007754     | 0.0049254             | torch.Size([256, 256])           |
| 797     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.0                            | bias                | torch.float32 |         | -0.1592708        | 0.1005408        | -0.0223481     | 0.0024216             | torch.Size([256])                |
| 797     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.0                            | output              | torch.float32 |         | -10.0851126       | 8.7595291        | -0.7448546     | 4.8955340             | torch.Size([2, 512, 256])        |
| 798     | torch.nn.modules.activation.ReLU                                                  | head.layers.6.layers.1                            | input               | torch.float32 |         | 0.0000000         | 8.7595291        | 0.5538393      | 1.1565852             | torch.Size([2, 512, 256])        |
| 798     | torch.nn.modules.activation.ReLU                                                  | head.layers.6.layers.1                            | output              | torch.float32 |         | 0.0000000         | 8.7595291        | 0.5538393      | 1.1565852             | torch.Size([2, 512, 256])        |
| 799     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.2                            | input               | torch.float32 |         | 0.0000000         | 8.7595291        | 0.5538393      | 1.1565852             | torch.Size([2, 512, 256])        |
| 799     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.2                            | weight              | torch.float32 |         | -1.1173502        | 0.6858456        | -0.0046823     | 0.0057485             | torch.Size([256, 256])           |
| 799     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.2                            | bias                | torch.float32 |         | -0.1407867        | 0.1756395        | -0.0032350     | 0.0043332             | torch.Size([256])                |
| 799     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.2                            | output              | torch.float32 |         | -14.3928642       | 10.9395037       | -0.4663885     | 7.4434886             | torch.Size([2, 512, 256])        |
| 800     | torch.nn.modules.activation.ReLU                                                  | head.layers.6.layers.3                            | input               | torch.float32 |         | 0.0000000         | 10.9395037       | 0.8503508      | 1.8361120             | torch.Size([2, 512, 256])        |
| 800     | torch.nn.modules.activation.ReLU                                                  | head.layers.6.layers.3                            | output              | torch.float32 |         | 0.0000000         | 10.9395037       | 0.8503508      | 1.8361120             | torch.Size([2, 512, 256])        |
| 801     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.6.layers.4.input_mean.mean            | input_0             | torch.float32 |         | 0.0000000         | 10.9395037       | 0.8503508      | 1.8361120             | torch.Size([2, 512, 256])        |
| 801     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.6.layers.4.input_mean.mean            | output              | torch.float32 |         | 0.4660826         | 1.4493030        | 0.8503509      | 0.0378274             | torch.Size([2, 512, 1])          |
| 802     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.6.layers.4.sub                        | input_0             | torch.float32 |         | 0.0000000         | 10.9395037       | 0.8503508      | 1.8361120             | torch.Size([2, 512, 256])        |
| 802     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.6.layers.4.sub                        | input_1             | torch.float32 |         | 0.4660826         | 1.4493030        | 0.8503509      | 0.0378274             | torch.Size([2, 512, 1])          |
| 802     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.6.layers.4.sub                        | output              | torch.float32 |         | -1.4493030        | 9.7696838        | -0.0000000     | 1.7983215             | torch.Size([2, 512, 256])        |
| 803     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.mul                        | input_0             | torch.float32 |         | -1.4493030        | 9.7696838        | -0.0000000     | 1.7983215             | torch.Size([2, 512, 256])        |
| 803     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.mul                        | input_1             | torch.float32 |         | -1.4493030        | 9.7696838        | -0.0000000     | 1.7983215             | torch.Size([2, 512, 256])        |
| 803     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.mul                        | output              | torch.float32 |         | 0.0000000         | 95.4467239       | 1.7983145      | 17.2393799            | torch.Size([2, 512, 256])        |
| 804     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.6.layers.4.var_mean.mean              | input_0             | torch.float32 |         | 0.0000000         | 95.4467239       | 1.7983145      | 17.2393799            | torch.Size([2, 512, 256])        |
| 804     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.6.layers.4.var_mean.mean              | output              | torch.float32 |         | 0.4523011         | 5.2916317        | 1.7983146      | 0.3989958             | torch.Size([2, 512, 1])          |
| 805     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.6.layers.4.rsqrt                      | input               | torch.float32 |         | 0.4523011         | 5.2916317        | 1.7983146      | 0.3989958             | torch.Size([2, 512, 1])          |
| 805     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.6.layers.4.rsqrt                      | output              | torch.float32 |         | 0.4347152         | 1.4868987        | 0.7763517      | 0.0155814             | torch.Size([2, 512, 1])          |
| 806     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.out_mul                    | input_0             | torch.float32 |         | -1.4493030        | 9.7696838        | -0.0000000     | 1.7983215             | torch.Size([2, 512, 256])        |
| 806     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.out_mul                    | input_1             | torch.float32 |         | 0.4347152         | 1.4868987        | 0.7763517      | 0.0155814             | torch.Size([2, 512, 1])          |
| 806     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.out_mul                    | output              | torch.float32 |         | -0.7695232        | 5.9128528        | -0.0000000     | 0.9999976             | torch.Size([2, 512, 256])        |
| 807     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.4.weight_quant               | input               | torch.float32 |         | 0.7643027         | 1.2954148        | 0.9712850      | 0.0065330             | torch.Size([256])                |
| 807     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.4.weight_quant               | output              | torch.float32 |         | 0.7643027         | 1.2954148        | 0.9712850      | 0.0065330             | torch.Size([256])                |
| 808     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.weight_mul                 | input_0             | torch.float32 |         | -0.7695232        | 5.9128528        | -0.0000000     | 0.9999976             | torch.Size([2, 512, 256])        |
| 808     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.weight_mul                 | input_1             | torch.float32 |         | 0.7643027         | 1.2954148        | 0.9712850      | 0.0065330             | torch.Size([256])                |
| 808     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.4.weight_mul                 | output              | torch.float32 |         | -0.9614933        | 6.6603746        | 0.0108248      | 0.9810475             | torch.Size([2, 512, 256])        |
| 809     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.4.bias_quant                 | input               | torch.float32 |         | -0.0766388        | 0.2512619        | 0.0415314      | 0.0046091             | torch.Size([256])                |
| 809     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.4.bias_quant                 | output              | torch.float32 |         | -0.0766388        | 0.2512619        | 0.0415314      | 0.0046091             | torch.Size([256])                |
| 810     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.layers.4.bias_add                   | input_0             | torch.float32 |         | -0.9614933        | 6.6603746        | 0.0108248      | 0.9810475             | torch.Size([2, 512, 256])        |
| 810     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.layers.4.bias_add                   | input_1             | torch.float32 |         | -0.0766388        | 0.2512619        | 0.0415314      | 0.0046091             | torch.Size([256])                |
| 810     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.layers.4.bias_add                   | output              | torch.float32 |         | -0.9598646        | 6.6663828        | 0.0523562      | 0.9350994             | torch.Size([2, 512, 256])        |
| 811     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.5                            | input               | torch.float32 |         | -0.9598646        | 6.6663828        | 0.0523562      | 0.9350994             | torch.Size([2, 512, 256])        |
| 811     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.5                            | weight              | torch.float32 |         | -0.9964333        | 0.5091414        | 0.0013438      | 0.0046180             | torch.Size([256, 256])           |
| 811     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.5                            | bias                | torch.float32 |         | -0.1558311        | 0.1135808        | -0.0241591     | 0.0024907             | torch.Size([256])                |
| 811     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.5                            | output              | torch.float32 |         | -11.3832846       | 12.2415085       | -0.7966509     | 7.7916837             | torch.Size([2, 512, 256])        |
| 812     | torch.nn.modules.activation.ReLU                                                  | head.layers.6.layers.6                            | input               | torch.float32 |         | 0.0000000         | 12.2415085       | 0.7885410      | 2.3507202             | torch.Size([2, 512, 256])        |
| 812     | torch.nn.modules.activation.ReLU                                                  | head.layers.6.layers.6                            | output              | torch.float32 |         | 0.0000000         | 12.2415085       | 0.7885410      | 2.3507202             | torch.Size([2, 512, 256])        |
| 813     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.7                            | input               | torch.float32 |         | 0.0000000         | 12.2415085       | 0.7885410      | 2.3507202             | torch.Size([2, 512, 256])        |
| 813     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.7                            | weight              | torch.float32 |         | -1.0164918        | 0.5062547        | -0.0056709     | 0.0047400             | torch.Size([256, 256])           |
| 813     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.7                            | bias                | torch.float32 |         | -0.0927861        | 0.2361103        | -0.0030607     | 0.0021607             | torch.Size([256])                |
| 813     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.7                            | output              | torch.float32 |         | -32.3634338       | 51.3352623       | -1.5409971     | 30.5906944            | torch.Size([2, 512, 256])        |
| 814     | torch.nn.modules.activation.ReLU                                                  | head.layers.6.layers.8                            | input               | torch.float32 |         | 0.0000000         | 51.3352623       | 1.2687423      | 13.3055000            | torch.Size([2, 512, 256])        |
| 814     | torch.nn.modules.activation.ReLU                                                  | head.layers.6.layers.8                            | output              | torch.float32 |         | 0.0000000         | 51.3352623       | 1.2687423      | 13.3055000            | torch.Size([2, 512, 256])        |
| 815     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.6.layers.9.input_mean.mean            | input_0             | torch.float32 |         | 0.0000000         | 51.3352623       | 1.2687423      | 13.3055000            | torch.Size([2, 512, 256])        |
| 815     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.6.layers.9.input_mean.mean            | output              | torch.float32 |         | 0.5953735         | 1.9981720        | 1.2687424      | 0.1994407             | torch.Size([2, 512, 1])          |
| 816     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.6.layers.9.sub                        | input_0             | torch.float32 |         | 0.0000000         | 51.3352623       | 1.2687423      | 13.3055000            | torch.Size([2, 512, 256])        |
| 816     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.6.layers.9.sub                        | input_1             | torch.float32 |         | 0.5953735         | 1.9981720        | 1.2687424      | 0.1994407             | torch.Size([2, 512, 1])          |
| 816     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.6.layers.9.sub                        | output              | torch.float32 |         | -1.9981720        | 49.4449997       | 0.0000000      | 13.1062536            | torch.Size([2, 512, 256])        |
| 817     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.mul                        | input_0             | torch.float32 |         | -1.9981720        | 49.4449997       | 0.0000000      | 13.1062536            | torch.Size([2, 512, 256])        |
| 817     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.mul                        | input_1             | torch.float32 |         | -1.9981720        | 49.4449997       | 0.0000000      | 13.1062536            | torch.Size([2, 512, 256])        |
| 817     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.mul                        | output              | torch.float32 |         | 0.0000000         | 2444.8081055     | 13.1062050     | 9155.1103516          | torch.Size([2, 512, 256])        |
| 818     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.6.layers.9.var_mean.mean              | input_0             | torch.float32 |         | 0.0000000         | 2444.8081055     | 13.1062050     | 9155.1103516          | torch.Size([2, 512, 256])        |
| 818     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.6.layers.9.var_mean.mean              | output              | torch.float32 |         | 2.6114159         | 29.6991692       | 13.1062031     | 32.1481018            | torch.Size([2, 512, 1])          |
| 819     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.6.layers.9.rsqrt                      | input               | torch.float32 |         | 2.6114159         | 29.6991692       | 13.1062031     | 32.1481018            | torch.Size([2, 512, 1])          |
| 819     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.6.layers.9.rsqrt                      | output              | torch.float32 |         | 0.1834965         | 0.6188154        | 0.2957209      | 0.0039471             | torch.Size([2, 512, 1])          |
| 820     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.out_mul                    | input_0             | torch.float32 |         | -1.9981720        | 49.4449997       | 0.0000000      | 13.1062536            | torch.Size([2, 512, 256])        |
| 820     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.out_mul                    | input_1             | torch.float32 |         | 0.1834965         | 0.6188154        | 0.2957209      | 0.0039471             | torch.Size([2, 512, 1])          |
| 820     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.out_mul                    | output              | torch.float32 |         | -0.6356332        | 10.6408682       | 0.0000000      | 1.0000029             | torch.Size([2, 512, 256])        |
| 821     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.9.weight_quant               | input               | torch.float32 |         | 0.7671473         | 1.2264483        | 0.9391562      | 0.0043200             | torch.Size([256])                |
| 821     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.9.weight_quant               | output              | torch.float32 |         | 0.7671473         | 1.2264483        | 0.9391562      | 0.0043200             | torch.Size([256])                |
| 822     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.weight_mul                 | input_0             | torch.float32 |         | -0.6356332        | 10.6408682       | 0.0000000      | 1.0000029             | torch.Size([2, 512, 256])        |
| 822     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.weight_mul                 | input_1             | torch.float32 |         | 0.7671473         | 1.2264483        | 0.9391562      | 0.0043200             | torch.Size([256])                |
| 822     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.9.weight_mul                 | output              | torch.float32 |         | -0.7795712        | 8.3977642        | -0.0047087     | 0.7456470             | torch.Size([2, 512, 256])        |
| 823     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.9.bias_quant                 | input               | torch.float32 |         | -0.1997112        | 0.1607553        | 0.0453104      | 0.0026038             | torch.Size([256])                |
| 823     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.9.bias_quant                 | output              | torch.float32 |         | -0.1997112        | 0.1607553        | 0.0453104      | 0.0026038             | torch.Size([256])                |
| 824     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.layers.9.bias_add                   | input_0             | torch.float32 |         | -0.7795712        | 8.3977642        | -0.0047087     | 0.7456470             | torch.Size([2, 512, 256])        |
| 824     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.layers.9.bias_add                   | input_1             | torch.float32 |         | -0.1997112        | 0.1607553        | 0.0453104      | 0.0026038             | torch.Size([256])                |
| 824     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.layers.9.bias_add                   | output              | torch.float32 |         | -0.7340225        | 8.2359524        | 0.0406017      | 0.6988801             | torch.Size([2, 512, 256])        |
| 825     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.10                           | input               | torch.float32 |         | -0.7340225        | 8.2359524        | 0.0406017      | 0.6988801             | torch.Size([2, 512, 256])        |
| 825     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.10                           | weight              | torch.float32 |         | -0.4182900        | 0.4529850        | 0.0011075      | 0.0032468             | torch.Size([11, 256])            |
| 825     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.10                           | bias                | torch.float32 |         | -0.0536531        | 0.0304303        | -0.0171225     | 0.0007017             | torch.Size([11])                 |
| 825     | torch.nn.modules.linear.Linear                                                    | head.layers.6.layers.10                           | output              | torch.float32 |         | -14.4727030       | 15.0534325       | -0.2986829     | 4.3531952             | torch.Size([2, 512, 11])         |
| 826     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.11.scale_quant_stub          | input               | torch.float32 |         | 0.1975845         | 1.0542313        | 0.5982738      | 0.1064052             | torch.Size([11])                 |
| 826     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.6.layers.11.scale_quant_stub          | output              | torch.float32 |         | 0.1975845         | 1.0542313        | 0.5982738      | 0.1064052             | torch.Size([11])                 |
| 827     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.11.mul                       | input_0             | torch.float32 |         | -14.4727030       | 15.0534325       | -0.2986829     | 4.3531952             | torch.Size([2, 512, 11])         |
| 827     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.11.mul                       | input_1             | torch.float32 |         | 0.1975845         | 1.0542313        | 0.5982738      | 0.1064052             | torch.Size([11])                 |
| 827     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.6.layers.11.mul                       | output              | torch.float32 |         | -15.2575760       | 15.2059402       | -0.2658673     | 4.2509723             | torch.Size([2, 512, 11])         |
| 828     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.add2                                | input_0             | torch.float32 |         | -15.2575760       | 15.2059402       | -0.2658673     | 4.2509723             | torch.Size([2, 512, 11])         |
| 828     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.add2                                | input_1             | torch.float32 |         | -52.9582825       | 52.8438606       | 0.4784662      | 77.4394913            | torch.Size([2, 512, 11])         |
| 828     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.6.add2                                | output              | torch.float32 |         | -53.6162720       | 53.6826859       | 0.2125989      | 79.3594742            | torch.Size([2, 512, 11])         |
| 829     | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant                                      | input               | torch.float32 |         | -53.6162720       | 53.6826859       | 0.2125989      | 79.3594742            | torch.Size([2, 512, 11])         |
| 829     | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant                                      | output              | torch.float32 |         | -53.6162720       | 53.6826859       | 0.2125989      | 79.3594742            | torch.Size([2, 512, 11])         |
| 830     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.6162720       | 53.6826859       | 0.2125989      | 79.3594742            | torch.Size([2, 512, 11])         |
| 830     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -53.6162720       | 53.6826859       | 0.7275968      | 289.7366333           | torch.Size([2, 512, 3])          |
| 831     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(2)                   | input               | torch.float32 |         | -53.6162720       | 53.6826859       | 0.7275968      | 289.7366333           | torch.Size([2, 512, 3])          |
| 831     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(2)                   | weight              | torch.float32 |         | -0.9216561        | 0.9167990        | -0.0046354     | 0.1373587             | torch.Size([128, 3])             |
| 831     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(2)                   | bias                | torch.float32 |         | -1.0762298        | 1.0183468        | -0.0273298     | 0.3650480             | torch.Size([128])                |
| 831     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(2)                   | output              | torch.float32 |         | -32.9417343       | 34.4927254       | -0.1053387     | 71.0098190            | torch.Size([2, 512, 128])        |
| 832     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(2)                   | input               | torch.float32 |         | 0.0000000         | 34.4927254       | 2.8821194      | 26.2425671            | torch.Size([2, 512, 128])        |
| 832     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(2)                   | output              | torch.float32 |         | 0.0000000         | 34.4927254       | 2.8821194      | 26.2425671            | torch.Size([2, 512, 128])        |
| 833     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 34.4927254       | 2.8821194      | 26.2425671            | torch.Size([2, 512, 128])        |
| 833     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(2)   | output              | torch.float32 |         | 0.2370655         | 7.3490019        | 2.8821194      | 4.1661963             | torch.Size([2, 512, 1])          |
| 834     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(2)               | input_0             | torch.float32 |         | 0.0000000         | 34.4927254       | 2.8821194      | 26.2425671            | torch.Size([2, 512, 128])        |
| 834     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(2)               | input_1             | torch.float32 |         | 0.2370655         | 7.3490019        | 2.8821194      | 4.1661963             | torch.Size([2, 512, 1])          |
| 834     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(2)               | output              | torch.float32 |         | -7.3490019        | 29.0739384       | 0.0000000      | 22.0804062            | torch.Size([2, 512, 128])        |
| 835     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(2)               | input_0             | torch.float32 |         | -7.3490019        | 29.0739384       | 0.0000000      | 22.0804062            | torch.Size([2, 512, 128])        |
| 835     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(2)               | input_1             | torch.float32 |         | -7.3490019        | 29.0739384       | 0.0000000      | 22.0804062            | torch.Size([2, 512, 128])        |
| 835     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(2)               | output              | torch.float32 |         | 0.0000000         | 845.2938843      | 22.0802402     | 2678.6782227          | torch.Size([2, 512, 128])        |
| 836     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(2)     | input_0             | torch.float32 |         | 0.0000000         | 845.2938843      | 22.0802402     | 2678.6782227          | torch.Size([2, 512, 128])        |
| 836     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(2)     | output              | torch.float32 |         | 0.0984571         | 76.3856354       | 22.0802402     | 481.9913940           | torch.Size([2, 512, 1])          |
| 837     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(2)             | input               | torch.float32 |         | 0.0984571         | 76.3856354       | 22.0802402     | 481.9913940           | torch.Size([2, 512, 1])          |
| 837     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(2)             | output              | torch.float32 |         | 0.1144179         | 3.1867964        | 0.9938890      | 1.6355660             | torch.Size([2, 512, 1])          |
| 838     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(2)           | input_0             | torch.float32 |         | -7.3490019        | 29.0739384       | 0.0000000      | 22.0804062            | torch.Size([2, 512, 128])        |
| 838     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(2)           | input_1             | torch.float32 |         | 0.1144179         | 3.1867964        | 0.9938890      | 1.6355660             | torch.Size([2, 512, 1])          |
| 838     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(2)           | output              | torch.float32 |         | -0.8848932        | 3.8379786        | 0.0000000      | 0.9999814             | torch.Size([2, 512, 128])        |
| 839     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(2)      | input               | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 839     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(2)      | output              | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 840     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(2)        | input_0             | torch.float32 |         | -0.8848932        | 3.8379786        | 0.0000000      | 0.9999814             | torch.Size([2, 512, 128])        |
| 840     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(2)        | input_1             | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 840     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(2)        | output              | torch.float32 |         | -1.0996108        | 4.2770314        | 0.0017662      | 0.9474841             | torch.Size([2, 512, 128])        |
| 841     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(2)        | input               | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 841     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(2)        | output              | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 842     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(2)          | input_0             | torch.float32 |         | -1.0996108        | 4.2770314        | 0.0017662      | 0.9474841             | torch.Size([2, 512, 128])        |
| 842     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(2)          | input_1             | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 842     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(2)          | output              | torch.float32 |         | -1.1033696        | 4.2732725        | 0.0105866      | 0.9407890             | torch.Size([2, 512, 128])        |
| 843     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(2)                   | input               | torch.float32 |         | -1.1033696        | 4.2732725        | 0.0105866      | 0.9407890             | torch.Size([2, 512, 128])        |
| 843     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(2)                   | weight              | torch.float32 |         | -0.3750711        | 0.3968706        | 0.0019093      | 0.0048458             | torch.Size([128, 128])           |
| 843     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(2)                   | bias                | torch.float32 |         | -0.1863807        | 0.1385574        | -0.0156467     | 0.0047256             | torch.Size([128])                |
| 843     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(2)                   | output              | torch.float32 |         | -6.8358879        | 8.2846098        | -0.1074017     | 3.5946441             | torch.Size([2, 512, 128])        |
| 844     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(2)                   | input               | torch.float32 |         | 0.0000000         | 8.2846098        | 0.6519649      | 1.3318114             | torch.Size([2, 512, 128])        |
| 844     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(2)                   | output              | torch.float32 |         | 0.0000000         | 8.2846098        | 0.6519649      | 1.3318114             | torch.Size([2, 512, 128])        |
| 845     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 8.2846098        | 0.6519649      | 1.3318114             | torch.Size([2, 512, 128])        |
| 845     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(2)   | output              | torch.float32 |         | 0.2879469         | 1.3673645        | 0.6519650      | 0.1762632             | torch.Size([2, 512, 1])          |
| 846     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(2)               | input_0             | torch.float32 |         | 0.0000000         | 8.2846098        | 0.6519649      | 1.3318114             | torch.Size([2, 512, 128])        |
| 846     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(2)               | input_1             | torch.float32 |         | 0.2879469         | 1.3673645        | 0.6519650      | 0.1762632             | torch.Size([2, 512, 1])          |
| 846     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(2)               | output              | torch.float32 |         | -1.3673645        | 6.9172454        | -0.0000000     | 1.1557189             | torch.Size([2, 512, 128])        |
| 847     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(2)               | input_0             | torch.float32 |         | -1.3673645        | 6.9172454        | -0.0000000     | 1.1557189             | torch.Size([2, 512, 128])        |
| 847     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(2)               | input_1             | torch.float32 |         | -1.3673645        | 6.9172454        | -0.0000000     | 1.1557189             | torch.Size([2, 512, 128])        |
| 847     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(2)               | output              | torch.float32 |         | 0.0000000         | 47.8482819       | 1.1557101      | 10.0771656            | torch.Size([2, 512, 128])        |
| 848     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(2)     | input_0             | torch.float32 |         | 0.0000000         | 47.8482819       | 1.1557101      | 10.0771656            | torch.Size([2, 512, 128])        |
| 848     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(2)     | output              | torch.float32 |         | 0.3055987         | 3.1224947        | 1.1557102      | 1.2870699             | torch.Size([2, 512, 1])          |
| 849     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(2)             | input               | torch.float32 |         | 0.3055987         | 3.1224947        | 1.1557102      | 1.2870699             | torch.Size([2, 512, 1])          |
| 849     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(2)             | output              | torch.float32 |         | 0.5659114         | 1.8089106        | 1.2247758      | 0.1695880             | torch.Size([2, 512, 1])          |
| 850     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(2)           | input_0             | torch.float32 |         | -1.3673645        | 6.9172454        | -0.0000000     | 1.1557189             | torch.Size([2, 512, 128])        |
| 850     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(2)           | input_1             | torch.float32 |         | 0.5659114         | 1.8089106        | 1.2247758      | 0.1695880             | torch.Size([2, 512, 1])          |
| 850     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(2)           | output              | torch.float32 |         | -0.7813455        | 7.0742917        | -0.0000000     | 0.9999909             | torch.Size([2, 512, 128])        |
| 851     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(2)      | input               | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 851     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(2)      | output              | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 852     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(2)        | input_0             | torch.float32 |         | -0.7813455        | 7.0742917        | -0.0000000     | 0.9999909             | torch.Size([2, 512, 128])        |
| 852     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(2)        | input_1             | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 852     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(2)        | output              | torch.float32 |         | -0.9494418        | 6.9513903        | 0.0377826      | 0.9653031             | torch.Size([2, 512, 128])        |
| 853     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(2)        | input               | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 853     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(2)        | output              | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 854     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(2)          | input_0             | torch.float32 |         | -0.9494418        | 6.9513903        | 0.0377826      | 0.9653031             | torch.Size([2, 512, 128])        |
| 854     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(2)          | input_1             | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 854     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(2)          | output              | torch.float32 |         | -0.9676573        | 6.9478459        | 0.0695849      | 0.9399332             | torch.Size([2, 512, 128])        |
| 855     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(2)                   | input               | torch.float32 |         | -0.9676573        | 6.9478459        | 0.0695849      | 0.9399332             | torch.Size([2, 512, 128])        |
| 855     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(2)                   | weight              | torch.float32 |         | -0.7504157        | 0.4182976        | -0.0024651     | 0.0052447             | torch.Size([128, 128])           |
| 855     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(2)                   | bias                | torch.float32 |         | -0.1397866        | 0.1210779        | 0.0064616      | 0.0040949             | torch.Size([128])                |
| 855     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(2)                   | output              | torch.float32 |         | -10.1913729       | 7.0844812        | -0.0313979     | 5.3926206             | torch.Size([2, 512, 128])        |
| 856     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(2)                   | input               | torch.float32 |         | 0.0000000         | 7.0844812        | 0.8640552      | 1.6895506             | torch.Size([2, 512, 128])        |
| 856     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(2)                   | output              | torch.float32 |         | 0.0000000         | 7.0844812        | 0.8640552      | 1.6895506             | torch.Size([2, 512, 128])        |
| 857     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 7.0844812        | 0.8640552      | 1.6895506             | torch.Size([2, 512, 128])        |
| 857     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(2)   | output              | torch.float32 |         | 0.5517324         | 1.4308867        | 0.8640552      | 0.1074900             | torch.Size([2, 512, 1])          |
| 858     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(2)               | input_0             | torch.float32 |         | 0.0000000         | 7.0844812        | 0.8640552      | 1.6895506             | torch.Size([2, 512, 128])        |
| 858     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(2)               | input_1             | torch.float32 |         | 0.5517324         | 1.4308867        | 0.8640552      | 0.1074900             | torch.Size([2, 512, 1])          |
| 858     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(2)               | output              | torch.float32 |         | -1.4308867        | 6.0952768        | 0.0000000      | 1.5821646             | torch.Size([2, 512, 128])        |
| 859     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(2)               | input_0             | torch.float32 |         | -1.4308867        | 6.0952768        | 0.0000000      | 1.5821646             | torch.Size([2, 512, 128])        |
| 859     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(2)               | input_1             | torch.float32 |         | -1.4308867        | 6.0952768        | 0.0000000      | 1.5821646             | torch.Size([2, 512, 128])        |
| 859     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(2)               | output              | torch.float32 |         | 0.0000000         | 37.1524010       | 1.5821526      | 10.2041121            | torch.Size([2, 512, 128])        |
| 860     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(2)     | input_0             | torch.float32 |         | 0.0000000         | 37.1524010       | 1.5821526      | 10.2041121            | torch.Size([2, 512, 128])        |
| 860     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(2)     | output              | torch.float32 |         | 0.8116332         | 3.0842793        | 1.5821526      | 0.7524666             | torch.Size([2, 512, 1])          |
| 861     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(2)             | input               | torch.float32 |         | 0.8116332         | 3.0842793        | 1.5821526      | 0.7524666             | torch.Size([2, 512, 1])          |
| 861     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(2)             | output              | torch.float32 |         | 0.5694065         | 1.1099857        | 0.8672891      | 0.0325991             | torch.Size([2, 512, 1])          |
| 862     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(2)           | input_0             | torch.float32 |         | -1.4308867        | 6.0952768        | 0.0000000      | 1.5821646             | torch.Size([2, 512, 128])        |
| 862     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(2)           | input_1             | torch.float32 |         | 0.5694065         | 1.1099857        | 0.8672891      | 0.0325991             | torch.Size([2, 512, 1])          |
| 862     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(2)           | output              | torch.float32 |         | -0.8147562        | 5.0289640        | 0.0000000      | 0.9999998             | torch.Size([2, 512, 128])        |
| 863     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(2)      | input               | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 863     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(2)      | output              | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 864     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(2)        | input_0             | torch.float32 |         | -0.8147562        | 5.0289640        | 0.0000000      | 0.9999998             | torch.Size([2, 512, 128])        |
| 864     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(2)        | input_1             | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 864     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(2)        | output              | torch.float32 |         | -0.9165853        | 5.2079158        | 0.0159188      | 0.9927632             | torch.Size([2, 512, 128])        |
| 865     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(2)        | input               | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 865     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(2)        | output              | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 866     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(2)          | input_0             | torch.float32 |         | -0.9165853        | 5.2079158        | 0.0159188      | 0.9927632             | torch.Size([2, 512, 128])        |
| 866     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(2)          | input_1             | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 866     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(2)          | output              | torch.float32 |         | -0.9029599        | 5.2322335        | 0.0375568      | 0.9777087             | torch.Size([2, 512, 128])        |
| 867     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(2)                   | input               | torch.float32 |         | -0.9029599        | 5.2322335        | 0.0375568      | 0.9777087             | torch.Size([2, 512, 128])        |
| 867     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(2)                   | weight              | torch.float32 |         | -0.4264432        | 0.3183554        | 0.0005866      | 0.0053991             | torch.Size([128, 128])           |
| 867     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(2)                   | bias                | torch.float32 |         | -0.1690418        | 0.1536980        | -0.0166056     | 0.0039884             | torch.Size([128])                |
| 867     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(2)                   | output              | torch.float32 |         | -11.8864527       | 10.7730341       | -0.4069732     | 4.4143319             | torch.Size([2, 512, 128])        |
| 868     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(2)                  | input               | torch.float32 |         | 0.0000000         | 10.7730341       | 0.6244364      | 1.5153702             | torch.Size([2, 512, 128])        |
| 868     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(2)                  | output              | torch.float32 |         | 0.0000000         | 10.7730341       | 0.6244364      | 1.5153702             | torch.Size([2, 512, 128])        |
| 869     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(2)  | input_0             | torch.float32 |         | 0.0000000         | 10.7730341       | 0.6244364      | 1.5153702             | torch.Size([2, 512, 128])        |
| 869     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(2)  | output              | torch.float32 |         | 0.5217853         | 0.7459224        | 0.6244363      | 0.0019405             | torch.Size([2, 512, 1])          |
| 870     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(2)              | input_0             | torch.float32 |         | 0.0000000         | 10.7730341       | 0.6244364      | 1.5153702             | torch.Size([2, 512, 128])        |
| 870     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(2)              | input_1             | torch.float32 |         | 0.5217853         | 0.7459224        | 0.6244363      | 0.0019405             | torch.Size([2, 512, 1])          |
| 870     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(2)              | output              | torch.float32 |         | -0.7459224        | 10.2044868       | 0.0000000      | 1.5134317             | torch.Size([2, 512, 128])        |
| 871     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(2)              | input_0             | torch.float32 |         | -0.7459224        | 10.2044868       | 0.0000000      | 1.5134317             | torch.Size([2, 512, 128])        |
| 871     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(2)              | input_1             | torch.float32 |         | -0.7459224        | 10.2044868       | 0.0000000      | 1.5134317             | torch.Size([2, 512, 128])        |
| 871     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(2)              | output              | torch.float32 |         | 0.0000000         | 104.1315536      | 1.5134201      | 23.7109642            | torch.Size([2, 512, 128])        |
| 872     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(2)    | input_0             | torch.float32 |         | 0.0000000         | 104.1315536      | 1.5134201      | 23.7109642            | torch.Size([2, 512, 128])        |
| 872     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(2)    | output              | torch.float32 |         | 1.0696437         | 1.9524162        | 1.5134201      | 0.0561613             | torch.Size([2, 512, 1])          |
| 873     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(2)            | input               | torch.float32 |         | 1.0696437         | 1.9524162        | 1.5134201      | 0.0561613             | torch.Size([2, 512, 1])          |
| 873     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(2)            | output              | torch.float32 |         | 0.7156698         | 0.9668930        | 0.8207742      | 0.0045218             | torch.Size([2, 512, 1])          |
| 874     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(2)          | input_0             | torch.float32 |         | -0.7459224        | 10.2044868       | 0.0000000      | 1.5134317             | torch.Size([2, 512, 128])        |
| 874     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(2)          | input_1             | torch.float32 |         | 0.7156698         | 0.9668930        | 0.8207742      | 0.0045218             | torch.Size([2, 512, 1])          |
| 874     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(2)          | output              | torch.float32 |         | -0.6061832        | 7.8299131        | 0.0000000      | 1.0000008             | torch.Size([2, 512, 128])        |
| 875     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(2)     | input               | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 875     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(2)     | output              | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 876     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(2)       | input_0             | torch.float32 |         | -0.6061832        | 7.8299131        | 0.0000000      | 1.0000008             | torch.Size([2, 512, 128])        |
| 876     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(2)       | input_1             | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 876     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(2)       | output              | torch.float32 |         | -0.8354877        | 7.9092379        | 0.0109208      | 0.9044743             | torch.Size([2, 512, 128])        |
| 877     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(2)       | input               | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 877     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(2)       | output              | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 878     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(2)         | input_0             | torch.float32 |         | -0.8354877        | 7.9092379        | 0.0109208      | 0.9044743             | torch.Size([2, 512, 128])        |
| 878     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(2)         | input_1             | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 878     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(2)         | output              | torch.float32 |         | -0.8396088        | 7.8619442        | 0.0729111      | 0.8689175             | torch.Size([2, 512, 128])        |
| 879     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.6162720       | 53.6826859       | 0.2125989      | 79.3594742            | torch.Size([2, 512, 11])         |
| 879     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -0.7555132        | 2.2162266        | 0.3114250      | 0.3492556             | torch.Size([2, 512, 3])          |
| 880     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(2)                  | input               | torch.float32 |         | -0.7555132        | 2.2162266        | 0.3114250      | 0.3492556             | torch.Size([2, 512, 3])          |
| 880     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(2)                  | weight              | torch.float32 |         | -0.8288664        | 0.6362330        | 0.0683853      | 0.1118651             | torch.Size([32, 3])              |
| 880     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(2)                  | bias                | torch.float32 |         | -0.5554879        | 0.5432062        | 0.0766153      | 0.1068659             | torch.Size([32])                 |
| 880     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(2)                  | output              | torch.float32 |         | -1.7059633        | 2.1861286        | 0.1271302      | 0.2308163             | torch.Size([2, 512, 32])         |
| 881     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(2)                  | input               | torch.float32 |         | 0.0000000         | 2.1861286        | 0.2623772      | 0.0991941             | torch.Size([2, 512, 32])         |
| 881     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(2)                  | output              | torch.float32 |         | 0.0000000         | 2.1861286        | 0.2623772      | 0.0991941             | torch.Size([2, 512, 32])         |
| 882     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(2)  | input_0             | torch.float32 |         | 0.0000000         | 2.1861286        | 0.2623772      | 0.0991941             | torch.Size([2, 512, 32])         |
| 882     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(2)  | output              | torch.float32 |         | 0.1636032         | 0.6155365        | 0.2623772      | 0.0126854             | torch.Size([2, 512, 1])          |
| 883     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(2)              | input_0             | torch.float32 |         | 0.0000000         | 2.1861286        | 0.2623772      | 0.0991941             | torch.Size([2, 512, 32])         |
| 883     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(2)              | input_1             | torch.float32 |         | 0.1636032         | 0.6155365        | 0.2623772      | 0.0126854             | torch.Size([2, 512, 1])          |
| 883     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(2)              | output              | torch.float32 |         | -0.6155365        | 1.5705922        | 0.0000000      | 0.0865206             | torch.Size([2, 512, 32])         |
| 884     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(2)              | input_0             | torch.float32 |         | -0.6155365        | 1.5705922        | 0.0000000      | 0.0865206             | torch.Size([2, 512, 32])         |
| 884     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(2)              | input_1             | torch.float32 |         | -0.6155365        | 1.5705922        | 0.0000000      | 0.0865206             | torch.Size([2, 512, 32])         |
| 884     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(2)              | output              | torch.float32 |         | 0.0000000         | 2.4667597        | 0.0865180      | 0.0250585             | torch.Size([2, 512, 32])         |
| 885     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(2)    | input_0             | torch.float32 |         | 0.0000000         | 2.4667597        | 0.0865180      | 0.0250585             | torch.Size([2, 512, 32])         |
| 885     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(2)    | output              | torch.float32 |         | 0.0319897         | 0.3483852        | 0.0865180      | 0.0043851             | torch.Size([2, 512, 1])          |
| 886     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(2)            | input               | torch.float32 |         | 0.0319897         | 0.3483852        | 0.0865180      | 0.0043851             | torch.Size([2, 512, 1])          |
| 886     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(2)            | output              | torch.float32 |         | 1.6941971         | 5.5901985        | 4.0683460      | 1.5373698             | torch.Size([2, 512, 1])          |
| 887     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(2)          | input_0             | torch.float32 |         | -0.6155365        | 1.5705922        | 0.0000000      | 0.0865206             | torch.Size([2, 512, 32])         |
| 887     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(2)          | input_1             | torch.float32 |         | 1.6941971         | 5.5901985        | 4.0683460      | 1.5373698             | torch.Size([2, 512, 1])          |
| 887     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(2)          | output              | torch.float32 |         | -1.1060520        | 3.0607896        | 0.0000000      | 0.9998497             | torch.Size([2, 512, 32])         |
| 888     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(2)     | input               | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 888     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(2)     | output              | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 889     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(2)       | input_0             | torch.float32 |         | -1.1060520        | 3.0607896        | 0.0000000      | 0.9998497             | torch.Size([2, 512, 32])         |
| 889     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(2)       | input_1             | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 889     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(2)       | output              | torch.float32 |         | -1.3202647        | 3.2815022        | 0.0062504      | 0.9828917             | torch.Size([2, 512, 32])         |
| 890     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(2)       | input               | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 890     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(2)       | output              | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 891     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(2)         | input_0             | torch.float32 |         | -1.3202647        | 3.2815022        | 0.0062504      | 0.9828917             | torch.Size([2, 512, 32])         |
| 891     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(2)         | input_1             | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 891     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(2)         | output              | torch.float32 |         | -1.2974266        | 3.2778814        | 0.0097766      | 0.9227974             | torch.Size([2, 512, 32])         |
| 892     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(2)                  | input               | torch.float32 |         | -1.2974266        | 3.2778814        | 0.0097766      | 0.9227974             | torch.Size([2, 512, 32])         |
| 892     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(2)                  | weight              | torch.float32 |         | -0.5793310        | 0.5422795        | -0.0032135     | 0.0176575             | torch.Size([32, 32])             |
| 892     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(2)                  | bias                | torch.float32 |         | -0.1716317        | 0.2230143        | 0.0007250      | 0.0126328             | torch.Size([32])                 |
| 892     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(2)                  | output              | torch.float32 |         | -3.9243751        | 2.1548910        | -0.1904061     | 1.4159691             | torch.Size([2, 512, 32])         |
| 893     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(2)                  | input               | torch.float32 |         | 0.0000000         | 2.1548910        | 0.3742334      | 0.2626885             | torch.Size([2, 512, 32])         |
| 893     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(2)                  | output              | torch.float32 |         | 0.0000000         | 2.1548910        | 0.3742334      | 0.2626885             | torch.Size([2, 512, 32])         |
| 894     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(2)  | input_0             | torch.float32 |         | 0.0000000         | 2.1548910        | 0.3742334      | 0.2626885             | torch.Size([2, 512, 32])         |
| 894     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(2)  | output              | torch.float32 |         | 0.3065982         | 0.4214143        | 0.3742334      | 0.0009877             | torch.Size([2, 512, 1])          |
| 895     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(2)              | input_0             | torch.float32 |         | 0.0000000         | 2.1548910        | 0.3742334      | 0.2626885             | torch.Size([2, 512, 32])         |
| 895     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(2)              | input_1             | torch.float32 |         | 0.3065982         | 0.4214143        | 0.3742334      | 0.0009877             | torch.Size([2, 512, 1])          |
| 895     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(2)              | output              | torch.float32 |         | -0.4214143        | 1.7819865        | -0.0000000     | 0.2617017             | torch.Size([2, 512, 32])         |
| 896     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(2)              | input_0             | torch.float32 |         | -0.4214143        | 1.7819865        | -0.0000000     | 0.2617017             | torch.Size([2, 512, 32])         |
| 896     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(2)              | input_1             | torch.float32 |         | -0.4214143        | 1.7819865        | -0.0000000     | 0.2617017             | torch.Size([2, 512, 32])         |
| 896     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(2)              | output              | torch.float32 |         | 0.0000000         | 3.1754758        | 0.2616937      | 0.1987155             | torch.Size([2, 512, 32])         |
| 897     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(2)    | input_0             | torch.float32 |         | 0.0000000         | 3.1754758        | 0.2616937      | 0.1987155             | torch.Size([2, 512, 32])         |
| 897     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(2)    | output              | torch.float32 |         | 0.1546460         | 0.3550466        | 0.2616937      | 0.0051010             | torch.Size([2, 512, 1])          |
| 898     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(2)            | input               | torch.float32 |         | 0.1546460         | 0.3550466        | 0.2616937      | 0.0051010             | torch.Size([2, 512, 1])          |
| 898     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(2)            | output              | torch.float32 |         | 1.6782290         | 2.5428259        | 2.0168691      | 0.0919980             | torch.Size([2, 512, 1])          |
| 899     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(2)          | input_0             | torch.float32 |         | -0.4214143        | 1.7819865        | -0.0000000     | 0.2617017             | torch.Size([2, 512, 32])         |
| 899     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(2)          | input_1             | torch.float32 |         | 1.6782290         | 2.5428259        | 2.0168691      | 0.0919980             | torch.Size([2, 512, 1])          |
| 899     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(2)          | output              | torch.float32 |         | -0.9097229        | 3.4547555        | -0.0000000     | 0.9999889             | torch.Size([2, 512, 32])         |
| 900     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(2)     | input               | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 900     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(2)     | output              | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 901     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(2)       | input_0             | torch.float32 |         | -0.9097229        | 3.4547555        | -0.0000000     | 0.9999889             | torch.Size([2, 512, 32])         |
| 901     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(2)       | input_1             | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 901     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(2)       | output              | torch.float32 |         | -0.9178551        | 3.4310789        | 0.0118067      | 0.9997171             | torch.Size([2, 512, 32])         |
| 902     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(2)       | input               | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 902     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(2)       | output              | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 903     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(2)         | input_0             | torch.float32 |         | -0.9178551        | 3.4310789        | 0.0118067      | 0.9997171             | torch.Size([2, 512, 32])         |
| 903     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(2)         | input_1             | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 903     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(2)         | output              | torch.float32 |         | -0.9008631        | 3.4002523        | 0.0215688      | 0.9666588             | torch.Size([2, 512, 32])         |
| 904     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(2)                  | input               | torch.float32 |         | -0.9008631        | 3.4002523        | 0.0215688      | 0.9666588             | torch.Size([2, 512, 32])         |
| 904     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(2)                  | weight              | torch.float32 |         | -0.5712157        | 0.5219681        | -0.0062917     | 0.0166056             | torch.Size([32, 32])             |
| 904     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(2)                  | bias                | torch.float32 |         | -0.1649730        | 0.2318604        | 0.0253026      | 0.0136139             | torch.Size([32])                 |
| 904     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(2)                  | output              | torch.float32 |         | -4.5177274        | 2.5698485        | -0.2079471     | 1.4097966             | torch.Size([2, 512, 32])         |
| 905     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(2)                  | input               | torch.float32 |         | 0.0000000         | 2.5698485        | 0.3640031      | 0.2710013             | torch.Size([2, 512, 32])         |
| 905     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(2)                  | output              | torch.float32 |         | 0.0000000         | 2.5698485        | 0.3640031      | 0.2710013             | torch.Size([2, 512, 32])         |
| 906     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(2)  | input_0             | torch.float32 |         | 0.0000000         | 2.5698485        | 0.3640031      | 0.2710013             | torch.Size([2, 512, 32])         |
| 906     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(2)  | output              | torch.float32 |         | 0.1859044         | 0.4789934        | 0.3640031      | 0.0106485             | torch.Size([2, 512, 1])          |
| 907     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(2)              | input_0             | torch.float32 |         | 0.0000000         | 2.5698485        | 0.3640031      | 0.2710013             | torch.Size([2, 512, 32])         |
| 907     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(2)              | input_1             | torch.float32 |         | 0.1859044         | 0.4789934        | 0.3640031      | 0.0106485             | torch.Size([2, 512, 1])          |
| 907     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(2)              | output              | torch.float32 |         | -0.4789934        | 2.1409516        | 0.0000000      | 0.2603628             | torch.Size([2, 512, 32])         |
| 908     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(2)              | input_0             | torch.float32 |         | -0.4789934        | 2.1409516        | 0.0000000      | 0.2603628             | torch.Size([2, 512, 32])         |
| 908     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(2)              | input_1             | torch.float32 |         | -0.4789934        | 2.1409516        | 0.0000000      | 0.2603628             | torch.Size([2, 512, 32])         |
| 908     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(2)              | output              | torch.float32 |         | 0.0000000         | 4.5836740        | 0.2603549      | 0.2579591             | torch.Size([2, 512, 32])         |
| 909     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(2)    | input_0             | torch.float32 |         | 0.0000000         | 4.5836740        | 0.2603549      | 0.2579591             | torch.Size([2, 512, 32])         |
| 909     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(2)    | output              | torch.float32 |         | 0.1381102         | 0.3856639        | 0.2603549      | 0.0067550             | torch.Size([2, 512, 1])          |
| 910     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(2)            | input               | torch.float32 |         | 0.1381102         | 0.3856639        | 0.2603549      | 0.0067550             | torch.Size([2, 512, 1])          |
| 910     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(2)            | output              | torch.float32 |         | 1.6102371         | 2.6907382        | 2.0515378      | 0.1461456             | torch.Size([2, 512, 1])          |
| 911     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(2)          | input_0             | torch.float32 |         | -0.4789934        | 2.1409516        | 0.0000000      | 0.2603628             | torch.Size([2, 512, 32])         |
| 911     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(2)          | input_1             | torch.float32 |         | 1.6102371         | 2.6907382        | 2.0515378      | 0.1461456             | torch.Size([2, 512, 1])          |
| 911     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(2)          | output              | torch.float32 |         | -0.9521292        | 3.8033559        | 0.0000000      | 0.9999869             | torch.Size([2, 512, 32])         |
| 912     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(2)     | input               | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 912     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(2)     | output              | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 913     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(2)       | input_0             | torch.float32 |         | -0.9521292        | 3.8033559        | 0.0000000      | 0.9999869             | torch.Size([2, 512, 32])         |
| 913     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(2)       | input_1             | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 913     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(2)       | output              | torch.float32 |         | -1.0773799        | 3.9931498        | 0.0054215      | 1.0230199             | torch.Size([2, 512, 32])         |
| 914     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(2)       | input               | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 914     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(2)       | output              | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 915     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(2)         | input_0             | torch.float32 |         | -1.0773799        | 3.9931498        | 0.0054215      | 1.0230199             | torch.Size([2, 512, 32])         |
| 915     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(2)         | input_1             | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 915     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(2)         | output              | torch.float32 |         | -1.0462992        | 4.0180783        | 0.0096177      | 0.9972329             | torch.Size([2, 512, 32])         |
| 916     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(2)                  | input               | torch.float32 |         | -1.0462992        | 4.0180783        | 0.0096177      | 0.9972329             | torch.Size([2, 512, 32])         |
| 916     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(2)                  | weight              | torch.float32 |         | -0.3204980        | 0.3365203        | -0.0020388     | 0.0145364             | torch.Size([32, 32])             |
| 916     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(2)                  | bias                | torch.float32 |         | -0.1559148        | 0.2119379        | 0.0091616      | 0.0105488             | torch.Size([32])                 |
| 916     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(2)                  | output              | torch.float32 |         | -2.3161788        | 2.6794028        | 0.0128490      | 0.8370656             | torch.Size([2, 512, 32])         |
| 917     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(2)                 | input               | torch.float32 |         | 0.0000000         | 2.6794028        | 0.3677279      | 0.3048746             | torch.Size([2, 512, 32])         |
| 917     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(2)                 | output              | torch.float32 |         | 0.0000000         | 2.6794028        | 0.3677279      | 0.3048746             | torch.Size([2, 512, 32])         |
| 918     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(2) | input_0             | torch.float32 |         | 0.0000000         | 2.6794028        | 0.3677279      | 0.3048746             | torch.Size([2, 512, 32])         |
| 918     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(2) | output              | torch.float32 |         | 0.3016550         | 0.5656128        | 0.3677279      | 0.0013484             | torch.Size([2, 512, 1])          |
| 919     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(2)             | input_0             | torch.float32 |         | 0.0000000         | 2.6794028        | 0.3677279      | 0.3048746             | torch.Size([2, 512, 32])         |
| 919     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(2)             | input_1             | torch.float32 |         | 0.3016550         | 0.5656128        | 0.3677279      | 0.0013484             | torch.Size([2, 512, 1])          |
| 919     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(2)             | output              | torch.float32 |         | -0.5656128        | 2.2678509        | -0.0000000     | 0.3035275             | torch.Size([2, 512, 32])         |
| 920     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(2)             | input_0             | torch.float32 |         | -0.5656128        | 2.2678509        | -0.0000000     | 0.3035275             | torch.Size([2, 512, 32])         |
| 920     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(2)             | input_1             | torch.float32 |         | -0.5656128        | 2.2678509        | -0.0000000     | 0.3035275             | torch.Size([2, 512, 32])         |
| 920     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(2)             | output              | torch.float32 |         | 0.0000000         | 5.1431475        | 0.3035182      | 0.4367643             | torch.Size([2, 512, 32])         |
| 921     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 5.1431475        | 0.3035182      | 0.4367643             | torch.Size([2, 512, 32])         |
| 921     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(2)   | output              | torch.float32 |         | 0.1676078         | 0.4070986        | 0.3035182      | 0.0014319             | torch.Size([2, 512, 1])          |
| 922     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(2)           | input               | torch.float32 |         | 0.1676078         | 0.4070986        | 0.3035182      | 0.0014319             | torch.Size([2, 512, 1])          |
| 922     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(2)           | output              | torch.float32 |         | 1.5672737         | 2.4425302        | 1.8261640      | 0.0142331             | torch.Size([2, 512, 1])          |
| 923     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(2)         | input_0             | torch.float32 |         | -0.5656128        | 2.2678509        | -0.0000000     | 0.3035275             | torch.Size([2, 512, 32])         |
| 923     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(2)         | input_1             | torch.float32 |         | 1.5672737         | 2.4425302        | 1.8261640      | 0.0142331             | torch.Size([2, 512, 1])          |
| 923     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(2)         | output              | torch.float32 |         | -1.2022386        | 3.7352085        | 0.0000000      | 0.9999970             | torch.Size([2, 512, 32])         |
| 924     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(2)    | input               | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 924     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(2)    | output              | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 925     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(2)      | input_0             | torch.float32 |         | -1.2022386        | 3.7352085        | 0.0000000      | 0.9999970             | torch.Size([2, 512, 32])         |
| 925     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(2)      | input_1             | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 925     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(2)      | output              | torch.float32 |         | -1.8221040        | 4.4406428        | -0.0402247     | 1.4033529             | torch.Size([2, 512, 32])         |
| 926     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(2)      | input               | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 926     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(2)      | output              | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 927     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(2)        | input_0             | torch.float32 |         | -1.8221040        | 4.4406428        | -0.0402247     | 1.4033529             | torch.Size([2, 512, 32])         |
| 927     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(2)        | input_1             | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 927     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(2)        | output              | torch.float32 |         | -1.7287153        | 4.4025292        | 0.0043439      | 1.3084153             | torch.Size([2, 512, 32])         |
| 928     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.6162720       | 53.6826859       | 0.2125989      | 79.3594742            | torch.Size([2, 512, 11])         |
| 928     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -0.9651139        | 0.4353507        | -0.0251088     | 0.0223062             | torch.Size([2, 512, 2])          |
| 929     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(2)                   | input               | torch.float32 |         | -0.9651139        | 0.4353507        | -0.0251088     | 0.0223062             | torch.Size([2, 512, 2])          |
| 929     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(2)                   | weight              | torch.float32 |         | -0.7023237        | 0.7394427        | 0.0490668      | 0.1972211             | torch.Size([32, 2])              |
| 929     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(2)                   | bias                | torch.float32 |         | -0.7971504        | 0.6681666        | -0.1171320     | 0.1641774             | torch.Size([32])                 |
| 929     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(2)                   | output              | torch.float32 |         | -1.5121145        | 1.0084068        | -0.1199609     | 0.1678847             | torch.Size([2, 512, 32])         |
| 930     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(2)                   | input               | torch.float32 |         | 0.0000000         | 1.0084068        | 0.1246489      | 0.0476259             | torch.Size([2, 512, 32])         |
| 930     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(2)                   | output              | torch.float32 |         | 0.0000000         | 1.0084068        | 0.1246489      | 0.0476259             | torch.Size([2, 512, 32])         |
| 931     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 1.0084068        | 0.1246489      | 0.0476259             | torch.Size([2, 512, 32])         |
| 931     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(2)   | output              | torch.float32 |         | 0.1118298         | 0.1654204        | 0.1246488      | 0.0000488             | torch.Size([2, 512, 1])          |
| 932     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(2)               | input_0             | torch.float32 |         | 0.0000000         | 1.0084068        | 0.1246489      | 0.0476259             | torch.Size([2, 512, 32])         |
| 932     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(2)               | input_1             | torch.float32 |         | 0.1118298         | 0.1654204        | 0.1246488      | 0.0000488             | torch.Size([2, 512, 1])          |
| 932     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(2)               | output              | torch.float32 |         | -0.1654204        | 0.8429864        | 0.0000000      | 0.0475772             | torch.Size([2, 512, 32])         |
| 933     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(2)               | input_0             | torch.float32 |         | -0.1654204        | 0.8429864        | 0.0000000      | 0.0475772             | torch.Size([2, 512, 32])         |
| 933     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(2)               | input_1             | torch.float32 |         | -0.1654204        | 0.8429864        | 0.0000000      | 0.0475772             | torch.Size([2, 512, 32])         |
| 933     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(2)               | output              | torch.float32 |         | 0.0000000         | 0.7106261        | 0.0475758      | 0.0073988             | torch.Size([2, 512, 32])         |
| 934     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(2)     | input_0             | torch.float32 |         | 0.0000000         | 0.7106261        | 0.0475758      | 0.0073988             | torch.Size([2, 512, 32])         |
| 934     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(2)     | output              | torch.float32 |         | 0.0407636         | 0.0862967        | 0.0475758      | 0.0000433             | torch.Size([2, 512, 1])          |
| 935     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(2)             | input               | torch.float32 |         | 0.0407636         | 0.0862967        | 0.0475758      | 0.0000433             | torch.Size([2, 512, 1])          |
| 935     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(2)             | output              | torch.float32 |         | 3.4039071         | 4.9523416        | 4.6102757      | 0.0675872             | torch.Size([2, 512, 1])          |
| 936     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(2)           | input_0             | torch.float32 |         | -0.1654204        | 0.8429864        | 0.0000000      | 0.0475772             | torch.Size([2, 512, 32])         |
| 936     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(2)           | input_1             | torch.float32 |         | 3.4039071         | 4.9523416        | 4.6102757      | 0.0675872             | torch.Size([2, 512, 1])          |
| 936     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(2)           | output              | torch.float32 |         | -0.6339521        | 3.3613002        | -0.0000000     | 0.9998173             | torch.Size([2, 512, 32])         |
| 937     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(2)      | input               | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 937     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(2)      | output              | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 938     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(2)        | input_0             | torch.float32 |         | -0.6339521        | 3.3613002        | -0.0000000     | 0.9998173             | torch.Size([2, 512, 32])         |
| 938     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(2)        | input_1             | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 938     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(2)        | output              | torch.float32 |         | -0.7447882        | 3.6397390        | 0.0038731      | 1.0142778             | torch.Size([2, 512, 32])         |
| 939     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(2)        | input               | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 939     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(2)        | output              | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 940     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(2)          | input_0             | torch.float32 |         | -0.7447882        | 3.6397390        | 0.0038731      | 1.0142778             | torch.Size([2, 512, 32])         |
| 940     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(2)          | input_1             | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 940     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(2)          | output              | torch.float32 |         | -0.7117284        | 3.5592775        | 0.0323770      | 0.9305902             | torch.Size([2, 512, 32])         |
| 941     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(2)                   | input               | torch.float32 |         | -0.7117284        | 3.5592775        | 0.0323770      | 0.9305902             | torch.Size([2, 512, 32])         |
| 941     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(2)                   | weight              | torch.float32 |         | -1.0547366        | 0.5812716        | 0.0070099      | 0.0187704             | torch.Size([32, 32])             |
| 941     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(2)                   | bias                | torch.float32 |         | -0.2183180        | 0.1396109        | -0.0140744     | 0.0103446             | torch.Size([32])                 |
| 941     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(2)                   | output              | torch.float32 |         | -4.4380188        | 1.6263825        | -0.5659347     | 1.4994545             | torch.Size([2, 512, 32])         |
| 942     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(2)                   | input               | torch.float32 |         | 0.0000000         | 1.6263825        | 0.2262248      | 0.1221817             | torch.Size([2, 512, 32])         |
| 942     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(2)                   | output              | torch.float32 |         | 0.0000000         | 1.6263825        | 0.2262248      | 0.1221817             | torch.Size([2, 512, 32])         |
| 943     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 1.6263825        | 0.2262248      | 0.1221817             | torch.Size([2, 512, 32])         |
| 943     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(2)   | output              | torch.float32 |         | 0.1938623         | 0.2472338        | 0.2262248      | 0.0000885             | torch.Size([2, 512, 1])          |
| 944     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(2)               | input_0             | torch.float32 |         | 0.0000000         | 1.6263825        | 0.2262248      | 0.1221817             | torch.Size([2, 512, 32])         |
| 944     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(2)               | input_1             | torch.float32 |         | 0.1938623         | 0.2472338        | 0.2262248      | 0.0000885             | torch.Size([2, 512, 1])          |
| 944     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(2)               | output              | torch.float32 |         | -0.2472338        | 1.3791487        | -0.0000000     | 0.1220933             | torch.Size([2, 512, 32])         |
| 945     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(2)               | input_0             | torch.float32 |         | -0.2472338        | 1.3791487        | -0.0000000     | 0.1220933             | torch.Size([2, 512, 32])         |
| 945     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(2)               | input_1             | torch.float32 |         | -0.2472338        | 1.3791487        | -0.0000000     | 0.1220933             | torch.Size([2, 512, 32])         |
| 945     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(2)               | output              | torch.float32 |         | 0.0000000         | 1.9020512        | 0.1220895      | 0.0502800             | torch.Size([2, 512, 32])         |
| 946     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(2)     | input_0             | torch.float32 |         | 0.0000000         | 1.9020512        | 0.1220895      | 0.0502800             | torch.Size([2, 512, 32])         |
| 946     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(2)     | output              | torch.float32 |         | 0.0947483         | 0.1517984        | 0.1220895      | 0.0000660             | torch.Size([2, 512, 1])          |
| 947     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(2)             | input               | torch.float32 |         | 0.0947483         | 0.1517984        | 0.1220895      | 0.0000660             | torch.Size([2, 512, 1])          |
| 947     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(2)             | output              | torch.float32 |         | 2.5665643         | 3.2485628        | 2.8666201      | 0.0092769             | torch.Size([2, 512, 1])          |
| 948     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(2)           | input_0             | torch.float32 |         | -0.2472338        | 1.3791487        | -0.0000000     | 0.1220933             | torch.Size([2, 512, 32])         |
| 948     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(2)           | input_1             | torch.float32 |         | 2.5665643         | 3.2485628        | 2.8666201      | 0.0092769             | torch.Size([2, 512, 1])          |
| 948     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(2)           | output              | torch.float32 |         | -0.6708153        | 3.5396738        | 0.0000000      | 0.9999482             | torch.Size([2, 512, 32])         |
| 949     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(2)      | input               | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 949     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(2)      | output              | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 950     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(2)        | input_0             | torch.float32 |         | -0.6708153        | 3.5396738        | 0.0000000      | 0.9999482             | torch.Size([2, 512, 32])         |
| 950     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(2)        | input_1             | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 950     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(2)        | output              | torch.float32 |         | -0.7511905        | 3.6121757        | -0.0016751     | 0.9830262             | torch.Size([2, 512, 32])         |
| 951     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(2)        | input               | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 951     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(2)        | output              | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 952     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(2)          | input_0             | torch.float32 |         | -0.7511905        | 3.6121757        | -0.0016751     | 0.9830262             | torch.Size([2, 512, 32])         |
| 952     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(2)          | input_1             | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 952     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(2)          | output              | torch.float32 |         | -0.7364964        | 3.5713594        | 0.0225692      | 0.9203503             | torch.Size([2, 512, 32])         |
| 953     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(2)                   | input               | torch.float32 |         | -0.7364964        | 3.5713594        | 0.0225692      | 0.9203503             | torch.Size([2, 512, 32])         |
| 953     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(2)                   | weight              | torch.float32 |         | -0.4480607        | 0.3678726        | 0.0004879      | 0.0160908             | torch.Size([32, 32])             |
| 953     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(2)                   | bias                | torch.float32 |         | -0.1861591        | 0.1739754        | 0.0155446      | 0.0137690             | torch.Size([32])                 |
| 953     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(2)                   | output              | torch.float32 |         | -3.6282008        | 1.8509508        | -0.3466522     | 1.7009215             | torch.Size([2, 512, 32])         |
| 954     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(2)                   | input               | torch.float32 |         | 0.0000000         | 1.8509508        | 0.3371688      | 0.1873253             | torch.Size([2, 512, 32])         |
| 954     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(2)                   | output              | torch.float32 |         | 0.0000000         | 1.8509508        | 0.3371688      | 0.1873253             | torch.Size([2, 512, 32])         |
| 955     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 1.8509508        | 0.3371688      | 0.1873253             | torch.Size([2, 512, 32])         |
| 955     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(2)   | output              | torch.float32 |         | 0.2840724         | 0.3589314        | 0.3371688      | 0.0000567             | torch.Size([2, 512, 1])          |
| 956     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(2)               | input_0             | torch.float32 |         | 0.0000000         | 1.8509508        | 0.3371688      | 0.1873253             | torch.Size([2, 512, 32])         |
| 956     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(2)               | input_1             | torch.float32 |         | 0.2840724         | 0.3589314        | 0.3371688      | 0.0000567             | torch.Size([2, 512, 1])          |
| 956     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(2)               | output              | torch.float32 |         | -0.3589314        | 1.5352876        | -0.0000000     | 0.1872686             | torch.Size([2, 512, 32])         |
| 957     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(2)               | input_0             | torch.float32 |         | -0.3589314        | 1.5352876        | -0.0000000     | 0.1872686             | torch.Size([2, 512, 32])         |
| 957     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(2)               | input_1             | torch.float32 |         | -0.3589314        | 1.5352876        | -0.0000000     | 0.1872686             | torch.Size([2, 512, 32])         |
| 957     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(2)               | output              | torch.float32 |         | 0.0000000         | 2.3571081        | 0.1872629      | 0.0651890             | torch.Size([2, 512, 32])         |
| 958     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(2)     | input_0             | torch.float32 |         | 0.0000000         | 2.3571081        | 0.1872629      | 0.0651890             | torch.Size([2, 512, 32])         |
| 958     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(2)     | output              | torch.float32 |         | 0.1609926         | 0.2138003        | 0.1872629      | 0.0000477             | torch.Size([2, 512, 1])          |
| 959     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(2)             | input               | torch.float32 |         | 0.1609926         | 0.2138003        | 0.1872629      | 0.0000477             | torch.Size([2, 512, 1])          |
| 959     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(2)             | output              | torch.float32 |         | 2.1626472         | 2.4922037        | 2.3119264      | 0.0016769             | torch.Size([2, 512, 1])          |
| 960     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(2)           | input_0             | torch.float32 |         | -0.3589314        | 1.5352876        | -0.0000000     | 0.1872686             | torch.Size([2, 512, 32])         |
| 960     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(2)           | input_1             | torch.float32 |         | 2.1626472         | 2.4922037        | 2.3119264      | 0.0016769             | torch.Size([2, 512, 1])          |
| 960     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(2)           | output              | torch.float32 |         | -0.8493357        | 3.3391380        | -0.0000000     | 0.9999770             | torch.Size([2, 512, 32])         |
| 961     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(2)      | input               | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 961     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(2)      | output              | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 962     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(2)        | input_0             | torch.float32 |         | -0.8493357        | 3.3391380        | -0.0000000     | 0.9999770             | torch.Size([2, 512, 32])         |
| 962     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(2)        | input_1             | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 962     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(2)        | output              | torch.float32 |         | -0.9419520        | 3.3888872        | -0.0061730     | 0.9916599             | torch.Size([2, 512, 32])         |
| 963     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(2)        | input               | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 963     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(2)        | output              | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 964     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(2)          | input_0             | torch.float32 |         | -0.9419520        | 3.3888872        | -0.0061730     | 0.9916599             | torch.Size([2, 512, 32])         |
| 964     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(2)          | input_1             | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 964     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(2)          | output              | torch.float32 |         | -0.9404687        | 3.3738124        | 0.0009967      | 0.9633345             | torch.Size([2, 512, 32])         |
| 965     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(2)                   | input               | torch.float32 |         | -0.9404687        | 3.3738124        | 0.0009967      | 0.9633345             | torch.Size([2, 512, 32])         |
| 965     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(2)                   | weight              | torch.float32 |         | -0.5597425        | 0.7001730        | 0.0015679      | 0.0160348             | torch.Size([32, 32])             |
| 965     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(2)                   | bias                | torch.float32 |         | -0.1810580        | 0.1736723        | -0.0279047     | 0.0091159             | torch.Size([32])                 |
| 965     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(2)                   | output              | torch.float32 |         | -4.3101912        | 3.0585363        | -0.2516137     | 1.3247352             | torch.Size([2, 512, 32])         |
| 966     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(2)                  | input               | torch.float32 |         | 0.0000000         | 3.0585363        | 0.2841983      | 0.3867418             | torch.Size([2, 512, 32])         |
| 966     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(2)                  | output              | torch.float32 |         | 0.0000000         | 3.0585363        | 0.2841983      | 0.3867418             | torch.Size([2, 512, 32])         |
| 967     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(2)  | input_0             | torch.float32 |         | 0.0000000         | 3.0585363        | 0.2841983      | 0.3867418             | torch.Size([2, 512, 32])         |
| 967     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(2)  | output              | torch.float32 |         | 0.2224636         | 0.3682830        | 0.2841983      | 0.0007586             | torch.Size([2, 512, 1])          |
| 968     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(2)              | input_0             | torch.float32 |         | 0.0000000         | 3.0585363        | 0.2841983      | 0.3867418             | torch.Size([2, 512, 32])         |
| 968     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(2)              | input_1             | torch.float32 |         | 0.2224636         | 0.3682830        | 0.2841983      | 0.0007586             | torch.Size([2, 512, 1])          |
| 968     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(2)              | output              | torch.float32 |         | -0.3682830        | 2.7860217        | 0.0000000      | 0.3859839             | torch.Size([2, 512, 32])         |
| 969     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(2)              | input_0             | torch.float32 |         | -0.3682830        | 2.7860217        | 0.0000000      | 0.3859839             | torch.Size([2, 512, 32])         |
| 969     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(2)              | input_1             | torch.float32 |         | -0.3682830        | 2.7860217        | 0.0000000      | 0.3859839             | torch.Size([2, 512, 32])         |
| 969     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(2)              | output              | torch.float32 |         | 0.0000000         | 7.7619171        | 0.3859721      | 1.4445884             | torch.Size([2, 512, 32])         |
| 970     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(2)    | input_0             | torch.float32 |         | 0.0000000         | 7.7619171        | 0.3859721      | 1.4445884             | torch.Size([2, 512, 32])         |
| 970     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(2)    | output              | torch.float32 |         | 0.2439761         | 0.4182774        | 0.3859721      | 0.0010355             | torch.Size([2, 512, 1])          |
| 971     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(2)            | input               | torch.float32 |         | 0.2439761         | 0.4182774        | 0.3859721      | 0.0010355             | torch.Size([2, 512, 1])          |
| 971     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(2)            | output              | torch.float32 |         | 1.5461890         | 2.0244987        | 1.6144532      | 0.0059219             | torch.Size([2, 512, 1])          |
| 972     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(2)          | input_0             | torch.float32 |         | -0.3682830        | 2.7860217        | 0.0000000      | 0.3859839             | torch.Size([2, 512, 32])         |
| 972     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(2)          | input_1             | torch.float32 |         | 1.5461890         | 2.0244987        | 1.6144532      | 0.0059219             | torch.Size([2, 512, 1])          |
| 972     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(2)          | output              | torch.float32 |         | -0.6761915        | 4.7805104        | -0.0000000     | 1.0000044             | torch.Size([2, 512, 32])         |
| 973     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(2)     | input               | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 973     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(2)     | output              | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 974     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(2)       | input_0             | torch.float32 |         | -0.6761915        | 4.7805104        | -0.0000000     | 1.0000044             | torch.Size([2, 512, 32])         |
| 974     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(2)       | input_1             | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 974     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(2)       | output              | torch.float32 |         | -0.9932134        | 3.9983709        | -0.0718206     | 0.8187840             | torch.Size([2, 512, 32])         |
| 975     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(2)       | input               | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 975     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(2)       | output              | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 976     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(2)         | input_0             | torch.float32 |         | -0.9932134        | 3.9983709        | -0.0718206     | 0.8187840             | torch.Size([2, 512, 32])         |
| 976     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(2)         | input_1             | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 976     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(2)         | output              | torch.float32 |         | -0.8190021        | 3.9102628        | 0.0085585      | 0.7140862             | torch.Size([2, 512, 32])         |
| 977     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.6162720       | 53.6826859       | 0.2125989      | 79.3594742            | torch.Size([2, 512, 11])         |
| 977     | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -2.3831592        | 0.3610535        | -0.2427534     | 0.4327180             | torch.Size([2, 512, 3])          |
| 978     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(2)                   | input               | torch.float32 |         | -2.3831592        | 0.3610535        | -0.2427534     | 0.4327180             | torch.Size([2, 512, 3])          |
| 978     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(2)                   | weight              | torch.float32 |         | -1.0475703        | 0.9848034        | -0.0054673     | 0.2080412             | torch.Size([64, 3])              |
| 978     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(2)                   | bias                | torch.float32 |         | -0.8030427        | 0.5068271        | -0.0504076     | 0.1294928             | torch.Size([64])                 |
| 978     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(2)                   | output              | torch.float32 |         | -2.0901880        | 1.5541592        | -0.0847435     | 0.3051605             | torch.Size([2, 512, 64])         |
| 979     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(2)                   | input               | torch.float32 |         | 0.0000000         | 1.5541592        | 0.1726534      | 0.0672369             | torch.Size([2, 512, 64])         |
| 979     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(2)                   | output              | torch.float32 |         | 0.0000000         | 1.5541592        | 0.1726534      | 0.0672369             | torch.Size([2, 512, 64])         |
| 980     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 1.5541592        | 0.1726534      | 0.0672369             | torch.Size([2, 512, 64])         |
| 980     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(2)   | output              | torch.float32 |         | 0.1212779         | 0.2994516        | 0.1726534      | 0.0053407             | torch.Size([2, 512, 1])          |
| 981     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(2)               | input_0             | torch.float32 |         | 0.0000000         | 1.5541592        | 0.1726534      | 0.0672369             | torch.Size([2, 512, 64])         |
| 981     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(2)               | input_1             | torch.float32 |         | 0.1212779         | 0.2994516        | 0.1726534      | 0.0053407             | torch.Size([2, 512, 1])          |
| 981     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(2)               | output              | torch.float32 |         | -0.2994516        | 1.2547076        | 0.0000000      | 0.0619013             | torch.Size([2, 512, 64])         |
| 982     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(2)               | input_0             | torch.float32 |         | -0.2994516        | 1.2547076        | 0.0000000      | 0.0619013             | torch.Size([2, 512, 64])         |
| 982     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(2)               | input_1             | torch.float32 |         | -0.2994516        | 1.2547076        | 0.0000000      | 0.0619013             | torch.Size([2, 512, 64])         |
| 982     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(2)               | output              | torch.float32 |         | 0.0000000         | 1.5742911        | 0.0619004      | 0.0249501             | torch.Size([2, 512, 64])         |
| 983     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(2)     | input_0             | torch.float32 |         | 0.0000000         | 1.5742911        | 0.0619004      | 0.0249501             | torch.Size([2, 512, 64])         |
| 983     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(2)     | output              | torch.float32 |         | 0.0269039         | 0.1557724        | 0.0619004      | 0.0029172             | torch.Size([2, 512, 1])          |
| 984     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(2)             | input               | torch.float32 |         | 0.0269039         | 0.1557724        | 0.0619004      | 0.0029172             | torch.Size([2, 512, 1])          |
| 984     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(2)             | output              | torch.float32 |         | 2.5336163         | 6.0955377        | 4.9326692      | 2.0009181             | torch.Size([2, 512, 1])          |
| 985     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(2)           | input_0             | torch.float32 |         | -0.2994516        | 1.2547076        | 0.0000000      | 0.0619013             | torch.Size([2, 512, 64])         |
| 985     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(2)           | input_1             | torch.float32 |         | 2.5336163         | 6.0955377        | 4.9326692      | 2.0009181             | torch.Size([2, 512, 1])          |
| 985     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(2)           | output              | torch.float32 |         | -0.8026100        | 3.1825380        | 0.0000000      | 0.9997520             | torch.Size([2, 512, 64])         |
| 986     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(2)      | input               | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 986     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(2)      | output              | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 987     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(2)        | input_0             | torch.float32 |         | -0.8026100        | 3.1825380        | 0.0000000      | 0.9997520             | torch.Size([2, 512, 64])         |
| 987     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(2)        | input_1             | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 987     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(2)        | output              | torch.float32 |         | -0.8958735        | 3.0962417        | 0.0121864      | 0.9439818             | torch.Size([2, 512, 64])         |
| 988     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(2)        | input               | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 988     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(2)        | output              | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 989     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(2)          | input_0             | torch.float32 |         | -0.8958735        | 3.0962417        | 0.0121864      | 0.9439818             | torch.Size([2, 512, 64])         |
| 989     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(2)          | input_1             | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 989     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(2)          | output              | torch.float32 |         | -0.8890252        | 3.0502450        | 0.0426404      | 0.8485209             | torch.Size([2, 512, 64])         |
| 990     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(2)                   | input               | torch.float32 |         | -0.8890252        | 3.0502450        | 0.0426404      | 0.8485209             | torch.Size([2, 512, 64])         |
| 990     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(2)                   | weight              | torch.float32 |         | -0.4523612        | 0.4813256        | -0.0014562     | 0.0096743             | torch.Size([64, 64])             |
| 990     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(2)                   | bias                | torch.float32 |         | -0.1183558        | 0.2243176        | 0.0150283      | 0.0049289             | torch.Size([64])                 |
| 990     | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(2)                   | output              | torch.float32 |         | -5.3917027        | 2.7284329        | -0.4395337     | 2.2256398             | torch.Size([2, 512, 64])         |
| 991     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(2)                   | input               | torch.float32 |         | 0.0000000         | 2.7284329        | 0.3197998      | 0.2123113             | torch.Size([2, 512, 64])         |
| 991     | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(2)                   | output              | torch.float32 |         | 0.0000000         | 2.7284329        | 0.3197998      | 0.2123113             | torch.Size([2, 512, 64])         |
| 992     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 2.7284329        | 0.3197998      | 0.2123113             | torch.Size([2, 512, 64])         |
| 992     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(2)   | output              | torch.float32 |         | 0.2069173         | 0.4454197        | 0.3197998      | 0.0053795             | torch.Size([2, 512, 1])          |
| 993     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(2)               | input_0             | torch.float32 |         | 0.0000000         | 2.7284329        | 0.3197998      | 0.2123113             | torch.Size([2, 512, 64])         |
| 993     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(2)               | input_1             | torch.float32 |         | 0.2069173         | 0.4454197        | 0.3197998      | 0.0053795             | torch.Size([2, 512, 1])          |
| 993     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(2)               | output              | torch.float32 |         | -0.4454197        | 2.3290696        | 0.0000000      | 0.2069370             | torch.Size([2, 512, 64])         |
| 994     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(2)               | input_0             | torch.float32 |         | -0.4454197        | 2.3290696        | 0.0000000      | 0.2069370             | torch.Size([2, 512, 64])         |
| 994     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(2)               | input_1             | torch.float32 |         | -0.4454197        | 2.3290696        | 0.0000000      | 0.2069370             | torch.Size([2, 512, 64])         |
| 994     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(2)               | output              | torch.float32 |         | 0.0000000         | 5.4245653        | 0.2069339      | 0.2112477             | torch.Size([2, 512, 64])         |
| 995     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(2)     | input_0             | torch.float32 |         | 0.0000000         | 5.4245653        | 0.2069339      | 0.2112477             | torch.Size([2, 512, 64])         |
| 995     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(2)     | output              | torch.float32 |         | 0.0835847         | 0.3561361        | 0.2069339      | 0.0055871             | torch.Size([2, 512, 1])          |
| 996     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(2)             | input               | torch.float32 |         | 0.0835847         | 0.3561361        | 0.2069339      | 0.0055871             | torch.Size([2, 512, 1])          |
| 996     | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(2)             | output              | torch.float32 |         | 1.6756600         | 3.4586823        | 2.3769779      | 0.3994042             | torch.Size([2, 512, 1])          |
| 997     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(2)           | input_0             | torch.float32 |         | -0.4454197        | 2.3290696        | 0.0000000      | 0.2069370             | torch.Size([2, 512, 64])         |
| 997     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(2)           | input_1             | torch.float32 |         | 1.6756600         | 3.4586823        | 2.3769779      | 0.3994042             | torch.Size([2, 512, 1])          |
| 997     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(2)           | output              | torch.float32 |         | -0.8511918        | 4.3752952        | 0.0000000      | 0.9999548             | torch.Size([2, 512, 64])         |
| 998     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(2)      | input               | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 998     | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(2)      | output              | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 999     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(2)        | input_0             | torch.float32 |         | -0.8511918        | 4.3752952        | 0.0000000      | 0.9999548             | torch.Size([2, 512, 64])         |
| 999     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(2)        | input_1             | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 999     | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(2)        | output              | torch.float32 |         | -0.9105418        | 4.2523255        | 0.0044145      | 0.9903970             | torch.Size([2, 512, 64])         |
| 1000    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(2)        | input               | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 1000    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(2)        | output              | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 1001    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(2)          | input_0             | torch.float32 |         | -0.9105418        | 4.2523255        | 0.0044145      | 0.9903970             | torch.Size([2, 512, 64])         |
| 1001    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(2)          | input_1             | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 1001    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(2)          | output              | torch.float32 |         | -0.8755423        | 4.2088971        | 0.0209089      | 0.9424049             | torch.Size([2, 512, 64])         |
| 1002    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(2)                   | input               | torch.float32 |         | -0.8755423        | 4.2088971        | 0.0209089      | 0.9424049             | torch.Size([2, 512, 64])         |
| 1002    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(2)                   | weight              | torch.float32 |         | -0.5707353        | 0.3620123        | -0.0010372     | 0.0088292             | torch.Size([64, 64])             |
| 1002    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(2)                   | bias                | torch.float32 |         | -0.1720246        | 0.1340137        | -0.0235144     | 0.0050507             | torch.Size([64])                 |
| 1002    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(2)                   | output              | torch.float32 |         | -5.3964925        | 3.7286351        | -0.3599727     | 2.2601025             | torch.Size([2, 512, 64])         |
| 1003    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(2)                   | input               | torch.float32 |         | 0.0000000         | 3.7286351        | 0.4535674      | 0.5237303             | torch.Size([2, 512, 64])         |
| 1003    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(2)                   | output              | torch.float32 |         | 0.0000000         | 3.7286351        | 0.4535674      | 0.5237303             | torch.Size([2, 512, 64])         |
| 1004    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(2)   | input_0             | torch.float32 |         | 0.0000000         | 3.7286351        | 0.4535674      | 0.5237303             | torch.Size([2, 512, 64])         |
| 1004    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(2)   | output              | torch.float32 |         | 0.3542838         | 0.5193402        | 0.4535674      | 0.0035812             | torch.Size([2, 512, 1])          |
| 1005    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(2)               | input_0             | torch.float32 |         | 0.0000000         | 3.7286351        | 0.4535674      | 0.5237303             | torch.Size([2, 512, 64])         |
| 1005    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(2)               | input_1             | torch.float32 |         | 0.3542838         | 0.5193402        | 0.4535674      | 0.0035812             | torch.Size([2, 512, 1])          |
| 1005    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(2)               | output              | torch.float32 |         | -0.5193402        | 3.2381949        | -0.0000000     | 0.5201525             | torch.Size([2, 512, 64])         |
| 1006    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(2)               | input_0             | torch.float32 |         | -0.5193402        | 3.2381949        | -0.0000000     | 0.5201525             | torch.Size([2, 512, 64])         |
| 1006    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(2)               | input_1             | torch.float32 |         | -0.5193402        | 3.2381949        | -0.0000000     | 0.5201525             | torch.Size([2, 512, 64])         |
| 1006    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(2)               | output              | torch.float32 |         | 0.0000000         | 10.4859066       | 0.5201445      | 1.1955192             | torch.Size([2, 512, 64])         |
| 1007    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(2)     | input_0             | torch.float32 |         | 0.0000000         | 10.4859066       | 0.5201445      | 1.1955192             | torch.Size([2, 512, 64])         |
| 1007    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(2)     | output              | torch.float32 |         | 0.3160564         | 0.7176902        | 0.5201445      | 0.0153072             | torch.Size([2, 512, 1])          |
| 1008    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(2)             | input               | torch.float32 |         | 0.3160564         | 0.7176902        | 0.5201445      | 0.0153072             | torch.Size([2, 512, 1])          |
| 1008    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(2)             | output              | torch.float32 |         | 1.1803980         | 1.7787331        | 1.4244561      | 0.0432644             | torch.Size([2, 512, 1])          |
| 1009    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(2)           | input_0             | torch.float32 |         | -0.5193402        | 3.2381949        | -0.0000000     | 0.5201525             | torch.Size([2, 512, 64])         |
| 1009    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(2)           | input_1             | torch.float32 |         | 1.1803980         | 1.7787331        | 1.4244561      | 0.0432644             | torch.Size([2, 512, 1])          |
| 1009    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(2)           | output              | torch.float32 |         | -0.6834647        | 4.1584725        | -0.0000000     | 0.9999945             | torch.Size([2, 512, 64])         |
| 1010    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(2)      | input               | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 1010    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(2)      | output              | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 1011    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(2)        | input_0             | torch.float32 |         | -0.6834647        | 4.1584725        | -0.0000000     | 0.9999945             | torch.Size([2, 512, 64])         |
| 1011    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(2)        | input_1             | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 1011    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(2)        | output              | torch.float32 |         | -0.7856674        | 4.2997313        | 0.0066814      | 1.0042310             | torch.Size([2, 512, 64])         |
| 1012    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(2)        | input               | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 1012    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(2)        | output              | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 1013    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(2)          | input_0             | torch.float32 |         | -0.7856674        | 4.2997313        | 0.0066814      | 1.0042310             | torch.Size([2, 512, 64])         |
| 1013    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(2)          | input_1             | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 1013    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(2)          | output              | torch.float32 |         | -0.7615004        | 4.2853608        | 0.0199642      | 0.9832027             | torch.Size([2, 512, 64])         |
| 1014    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(2)                   | input               | torch.float32 |         | -0.7615004        | 4.2853608        | 0.0199642      | 0.9832027             | torch.Size([2, 512, 64])         |
| 1014    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(2)                   | weight              | torch.float32 |         | -0.5701389        | 0.3477888        | 0.0006721      | 0.0085883             | torch.Size([64, 64])             |
| 1014    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(2)                   | bias                | torch.float32 |         | -0.1677032        | 0.1709885        | -0.0237130     | 0.0070098             | torch.Size([64])                 |
| 1014    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(2)                   | output              | torch.float32 |         | -4.7947373        | 7.2182360        | -0.5174260     | 1.8341818             | torch.Size([2, 512, 64])         |
| 1015    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(2)                  | input               | torch.float32 |         | 0.0000000         | 7.2182360        | 0.2520956      | 0.6961232             | torch.Size([2, 512, 64])         |
| 1015    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(2)                  | output              | torch.float32 |         | 0.0000000         | 7.2182360        | 0.2520956      | 0.6961232             | torch.Size([2, 512, 64])         |
| 1016    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(2)  | input_0             | torch.float32 |         | 0.0000000         | 7.2182360        | 0.2520956      | 0.6961232             | torch.Size([2, 512, 64])         |
| 1016    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(2)  | output              | torch.float32 |         | 0.2015724         | 0.3380481        | 0.2520956      | 0.0015342             | torch.Size([2, 512, 1])          |
| 1017    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(2)              | input_0             | torch.float32 |         | 0.0000000         | 7.2182360        | 0.2520956      | 0.6961232             | torch.Size([2, 512, 64])         |
| 1017    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(2)              | input_1             | torch.float32 |         | 0.2015724         | 0.3380481        | 0.2520956      | 0.0015342             | torch.Size([2, 512, 1])          |
| 1017    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(2)              | output              | torch.float32 |         | -0.3380481        | 7.0129132        | 0.0000000      | 0.6945906             | torch.Size([2, 512, 64])         |
| 1018    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(2)              | input_0             | torch.float32 |         | -0.3380481        | 7.0129132        | 0.0000000      | 0.6945906             | torch.Size([2, 512, 64])         |
| 1018    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(2)              | input_1             | torch.float32 |         | -0.3380481        | 7.0129132        | 0.0000000      | 0.6945906             | torch.Size([2, 512, 64])         |
| 1018    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(2)              | output              | torch.float32 |         | 0.0000000         | 49.1809502       | 0.6945799      | 21.5573273            | torch.Size([2, 512, 64])         |
| 1019    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(2)    | input_0             | torch.float32 |         | 0.0000000         | 49.1809502       | 0.6945799      | 21.5573273            | torch.Size([2, 512, 64])         |
| 1019    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(2)    | output              | torch.float32 |         | 0.4780299         | 0.8287762        | 0.6945799      | 0.0128730             | torch.Size([2, 512, 1])          |
| 1020    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(2)            | input               | torch.float32 |         | 0.4780299         | 0.8287762        | 0.6945799      | 0.0128730             | torch.Size([2, 512, 1])          |
| 1020    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(2)            | output              | torch.float32 |         | 1.0984461         | 1.4463319        | 1.2134883      | 0.0120976             | torch.Size([2, 512, 1])          |
| 1021    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(2)          | input_0             | torch.float32 |         | -0.3380481        | 7.0129132        | 0.0000000      | 0.6945906             | torch.Size([2, 512, 64])         |
| 1021    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(2)          | input_1             | torch.float32 |         | 1.0984461         | 1.4463319        | 1.2134883      | 0.0120976             | torch.Size([2, 512, 1])          |
| 1021    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(2)          | output              | torch.float32 |         | -0.4744893        | 7.7586498        | -0.0000000     | 1.0000004             | torch.Size([2, 512, 64])         |
| 1022    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(2)     | input               | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 1022    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(2)     | output              | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 1023    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(2)       | input_0             | torch.float32 |         | -0.4744893        | 7.7586498        | -0.0000000     | 1.0000004             | torch.Size([2, 512, 64])         |
| 1023    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(2)       | input_1             | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 1023    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(2)       | output              | torch.float32 |         | -0.6085324        | 5.8807788        | -0.0328532     | 0.6982971             | torch.Size([2, 512, 64])         |
| 1024    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(2)       | input               | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 1024    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(2)       | output              | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 1025    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(2)         | input_0             | torch.float32 |         | -0.6085324        | 5.8807788        | -0.0328532     | 0.6982971             | torch.Size([2, 512, 64])         |
| 1025    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(2)         | input_1             | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 1025    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(2)         | output              | torch.float32 |         | -0.6047576        | 5.7884722        | 0.0571521      | 0.6102120             | torch.Size([2, 512, 64])         |
| 1026    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | input_0             | torch.float32 |         | -0.8396088        | 7.8619442        | 0.0729111      | 0.8689175             | torch.Size([2, 512, 128])        |
| 1026    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | input_1             | torch.float32 |         | -1.7287153        | 4.4025292        | 0.0043439      | 1.3084153             | torch.Size([2, 512, 32])         |
| 1026    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | input_2             | torch.float32 |         | -0.8190021        | 3.9102628        | 0.0085585      | 0.7140862             | torch.Size([2, 512, 32])         |
| 1026    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | input_3             | torch.float32 |         | -0.6047576        | 5.7884722        | 0.0571521      | 0.6102120             | torch.Size([2, 512, 64])         |
| 1026    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(2)                        | output              | torch.float32 |         | -1.7287153        | 7.8619442        | 0.0523564      | 0.8405592             | torch.Size([2, 512, 256])        |
| 1027    | torch.nn.modules.linear.Linear                                                    | head.fc_before(2)                                 | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 1027    | torch.nn.modules.linear.Linear                                                    | head.fc_before(2)                                 | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 1027    | torch.nn.modules.linear.Linear                                                    | head.fc_before(2)                                 | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 1028    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.7.query_cat                           | input_0             | torch.float32 |         | -5.6735950        | 3.3347695        | 0.0066565      | 0.6038858             | torch.Size([2, 512, 256])        |
| 1028    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.7.query_cat                           | input_1             | torch.float32 |         | -1.7287153        | 7.8619442        | 0.0523564      | 0.8405592             | torch.Size([2, 512, 256])        |
| 1028    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.7.query_cat                           | output              | torch.float32 |         | -5.6735950        | 7.8619442        | 0.0295065      | 0.7227433             | torch.Size([2, 512, 512])        |
| 1029    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.7.key_cat                             | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 1029    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.7.key_cat                             | input_1             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0508909      | 0.8514420             | torch.Size([2, 256, 256])        |
| 1029    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.7.key_cat                             | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 1030    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | input_0             | torch.float32 |         | -5.6735950        | 7.8619442        | 0.0295065      | 0.7227433             | torch.Size([2, 512, 512])        |
| 1030    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | output              | torch.float32 |         | -5.6735950        | 7.8619442        | 0.0295065      | 0.7227433             | torch.Size([512, 2, 512])        |
| 1031    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 1031    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1032    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 1032    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1033    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | input_0             | torch.float32 |         | -5.6735950        | 7.8619442        | 0.0295065      | 0.7227433             | torch.Size([512, 2, 512])        |
| 1033    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | output              | torch.float32 |         | -5.6735950        | 7.8619442        | 0.0295065      | 0.7227433             | torch.Size([512, 2, 512])        |
| 1034    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1034    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1035    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1035    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1036    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.q_proj                         | input               | torch.float32 |         | -5.6735950        | 7.8619442        | 0.0295065      | 0.7227433             | torch.Size([512, 2, 512])        |
| 1036    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.q_proj                         | weight              | torch.float32 |         | -0.2652678        | 0.2628567        | -0.0000400     | 0.0033250             | torch.Size([512, 512])           |
| 1036    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.q_proj                         | bias                | torch.float32 |         | -0.1143946        | 0.1122871        | 0.0018811      | 0.0010206             | torch.Size([512])                |
| 1036    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.q_proj                         | output              | torch.float32 |         | -14.4300871       | 12.6032553       | 0.1165267      | 11.5794983            | torch.Size([512, 2, 512])        |
| 1037    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.k_proj                         | input               | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1037    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.k_proj                         | weight              | torch.float32 |         | -0.2754143        | 0.2652588        | 0.0001362      | 0.0034943             | torch.Size([512, 512])           |
| 1037    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.k_proj                         | bias                | torch.float32 |         | -0.0046830        | 0.0034708        | 0.0000401      | 0.0000013             | torch.Size([512])                |
| 1037    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.k_proj                         | output              | torch.float32 |         | -3.9849393        | 5.0081239        | 0.0506301      | 2.6727066             | torch.Size([256, 2, 512])        |
| 1038    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.v_proj                         | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1038    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.v_proj                         | weight              | torch.float32 |         | -0.1505703        | 0.1412487        | 0.0000714      | 0.0010024             | torch.Size([512, 512])           |
| 1038    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.v_proj                         | bias                | torch.float32 |         | -0.0650689        | 0.0530252        | 0.0005504      | 0.0003445             | torch.Size([512])                |
| 1038    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.v_proj                         | output              | torch.float32 |         | -0.0650689        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([256, 2, 512])        |
| 1039    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | input_0             | torch.float32 |         | -14.4300871       | 12.6032553       | 0.1165267      | 11.5794983            | torch.Size([512, 2, 512])        |
| 1039    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | output              | torch.float32 |         | -14.4300871       | 12.6032553       | 0.1165267      | 11.5794983            | torch.Size([512, 16, 64])        |
| 1040    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | input_0             | torch.float32 |         | -14.4300871       | 12.6032553       | 0.1165267      | 11.5794983            | torch.Size([512, 16, 64])        |
| 1040    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | output              | torch.float32 |         | -14.4300871       | 12.6032553       | 0.1165267      | 11.5794983            | torch.Size([16, 512, 64])        |
| 1041    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | input_0             | torch.float32 |         | -3.9849393        | 5.0081239        | 0.0506301      | 2.6727066             | torch.Size([256, 2, 512])        |
| 1041    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | output              | torch.float32 |         | -3.9849393        | 5.0081239        | 0.0506301      | 2.6727066             | torch.Size([256, 16, 64])        |
| 1042    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | input_0             | torch.float32 |         | -3.9849393        | 5.0081239        | 0.0506301      | 2.6727066             | torch.Size([256, 16, 64])        |
| 1042    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | output              | torch.float32 |         | -3.9849393        | 5.0081239        | 0.0506301      | 2.6727066             | torch.Size([16, 256, 64])        |
| 1043    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | input_0             | torch.float32 |         | -0.0650689        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([256, 2, 512])        |
| 1043    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | output              | torch.float32 |         | -0.0650689        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([256, 16, 64])        |
| 1044    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | input_0             | torch.float32 |         | -0.0650689        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([256, 16, 64])        |
| 1044    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | output              | torch.float32 |         | -0.0650689        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([16, 256, 64])        |
| 1045    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.7.attn.q_scale_mul                    | input_0             | torch.float32 |         | -14.4300871       | 12.6032553       | 0.1165267      | 11.5794983            | torch.Size([16, 512, 64])        |
| 1045    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.7.attn.q_scale_mul                    | output              | torch.float32 |         | -1.8037609        | 1.5754069        | 0.0145658      | 0.1809297             | torch.Size([16, 512, 64])        |
| 1046    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | input_0             | torch.float32 |         | -3.9849393        | 5.0081239        | 0.0506301      | 2.6727066             | torch.Size([16, 256, 64])        |
| 1046    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | output              | torch.float32 |         | -3.9849393        | 5.0081239        | 0.0506301      | 2.6727066             | torch.Size([16, 64, 256])        |
| 1047    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.7.attn.matmul                         | input_0             | torch.float32 |         | -1.8037609        | 1.5754069        | 0.0145658      | 0.1809297             | torch.Size([16, 512, 64])        |
| 1047    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.7.attn.matmul                         | input_1             | torch.float32 |         | -3.9849393        | 5.0081239        | 0.0506301      | 2.6727066             | torch.Size([16, 64, 256])        |
| 1047    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.7.attn.matmul                         | output              | torch.float32 |         | -87.9335327       | 36.2919083       | -10.9927959    | 567.9937134           | torch.Size([16, 512, 256])       |
| 1048    | torch.Tensor.max                                                                  | head.layers.7.attn.softmax                        | input               | torch.float32 |         | -87.9335327       | 36.2919083       | -10.9927959    | 567.9937134           | torch.Size([16, 512, 256])       |
| 1048    | torch.Tensor.max                                                                  | head.layers.7.attn.softmax                        | output_0            | torch.float32 |         | -87.9335327       | 36.2919083       | -10.9927969    | 568.0628052           | torch.Size([16, 512, 1])         |
| 1048    | torch.Tensor.max                                                                  | head.layers.7.attn.softmax                        | output_1            | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 1])         |
| 1049    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.7.attn.softmax.sub                    | input_0             | torch.float32 |         | -87.9335327       | 36.2919083       | -10.9927959    | 567.9937134           | torch.Size([16, 512, 256])       |
| 1049    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.7.attn.softmax.sub                    | input_1             | torch.float32 |         | -87.9335327       | 36.2919083       | -10.9927969    | 568.0628052           | torch.Size([16, 512, 1])         |
| 1049    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.7.attn.softmax.sub                    | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1050    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.7.attn.softmax.exp                    | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1050    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.7.attn.softmax.exp                    | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1051    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.7.attn.softmax.sum                    | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1051    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.7.attn.softmax.sum                    | output              | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 1052    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.7.attn.softmax.reciprocal             | input               | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 1052    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.7.attn.softmax.reciprocal             | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 1053    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.7.attn.softmax.mul                    | input_0             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1053    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.7.attn.softmax.mul                    | input_1             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 1053    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.7.attn.softmax.mul                    | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1054    | torch.nn.modules.dropout.Dropout                                                  | head.layers.7.attn.attention_drop                 | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1054    | torch.nn.modules.dropout.Dropout                                                  | head.layers.7.attn.attention_drop                 | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1055    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.7.attn.attn_matmul                    | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1055    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.7.attn.attn_matmul                    | input_1             | torch.float32 |         | -0.0650689        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([16, 256, 64])        |
| 1055    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.7.attn.attn_matmul                    | output              | torch.float32 |         | -0.0650690        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([16, 512, 64])        |
| 1056    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | input_0             | torch.float32 |         | -0.0650690        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([16, 512, 64])        |
| 1056    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | output              | torch.float32 |         | -0.0650690        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([512, 16, 64])        |
| 1057    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | input_0             | torch.float32 |         | -0.0650690        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([512, 16, 64])        |
| 1057    | torch.Tensor.reshape                                                              | head.layers.7.attn                                | output              | torch.float32 |         | -0.0650690        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([512, 2, 512])        |
| 1058    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.out_proj                       | input               | torch.float32 |         | -0.0650690        | 0.0530252        | 0.0005504      | 0.0003439             | torch.Size([512, 2, 512])        |
| 1058    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.out_proj                       | weight              | torch.float32 |         | -0.1888028        | 0.1700685        | 0.0000971      | 0.0020714             | torch.Size([512, 512])           |
| 1058    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.out_proj                       | bias                | torch.float32 |         | -0.2538213        | 0.2903754        | 0.0073539      | 0.0048732             | torch.Size([512])                |
| 1058    | torch.nn.modules.linear.Linear                                                    | head.layers.7.attn.out_proj                       | output              | torch.float32 |         | -0.4826050        | 0.5067946        | 0.0107099      | 0.0129332             | torch.Size([512, 2, 512])        |
| 1059    | torch.Tensor.view                                                                 | head.layers.7.attn                                | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1059    | torch.Tensor.view                                                                 | head.layers.7.attn                                | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 1060    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.7.attn.attn_weights_mean              | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 1060    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.7.attn.attn_weights_mean              | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 512, 256])        |
| 1061    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | input_0             | torch.float32 |         | -0.4826050        | 0.5067946        | 0.0107099      | 0.0129332             | torch.Size([512, 2, 512])        |
| 1061    | torch.Tensor.transpose                                                            | head.layers.7.attn                                | output              | torch.float32 |         | -0.4826050        | 0.5067946        | 0.0107099      | 0.0129332             | torch.Size([2, 512, 512])        |
| 1062    | torch.nn.modules.dropout.Dropout                                                  | head.layers.7.dropout                             | input               | torch.float32 |         | -0.4826050        | 0.5067946        | 0.0107099      | 0.0129332             | torch.Size([2, 512, 512])        |
| 1062    | torch.nn.modules.dropout.Dropout                                                  | head.layers.7.dropout                             | output              | torch.float32 |         | -0.4826050        | 0.5067946        | 0.0107099      | 0.0129332             | torch.Size([2, 512, 512])        |
| 1063    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.7.add                                 | input_0             | torch.float32 |         | -5.6735950        | 7.8619442        | 0.0295065      | 0.7227433             | torch.Size([2, 512, 512])        |
| 1063    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.7.add                                 | input_1             | torch.float32 |         | -0.4826050        | 0.5067946        | 0.0107099      | 0.0129332             | torch.Size([2, 512, 512])        |
| 1063    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.7.add                                 | output              | torch.float32 |         | -5.2450132        | 7.6921773        | 0.0402164      | 0.6615818             | torch.Size([2, 512, 512])        |
| 1064    | torch.nn.modules.linear.Linear                                                    | head.fc_after(2)                                  | input               | torch.float32 |         | -5.2450132        | 7.6921773        | 0.0402164      | 0.6615818             | torch.Size([2, 512, 512])        |
| 1064    | torch.nn.modules.linear.Linear                                                    | head.fc_after(2)                                  | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 1064    | torch.nn.modules.linear.Linear                                                    | head.fc_after(2)                                  | output              | torch.float32 |         | -5.3732157        | 7.4817314        | -0.0071356     | 0.7321773             | torch.Size([2, 512, 256])        |
| 1065    | torch.nn.modules.linear.Linear                                                    | head.fc_before(3)                                 | input               | torch.float32 |         | -5.3732157        | 7.4817314        | -0.0071356     | 0.7321773             | torch.Size([2, 512, 256])        |
| 1065    | torch.nn.modules.linear.Linear                                                    | head.fc_before(3)                                 | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 1065    | torch.nn.modules.linear.Linear                                                    | head.fc_before(3)                                 | output              | torch.float32 |         | -3.4635470        | 3.3912079        | 0.0048847      | 0.0464564             | torch.Size([2, 512, 512])        |
| 1066    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.8.query_cat                           | input_0             | torch.float32 |         | -5.3732157        | 7.4817314        | -0.0071356     | 0.7321773             | torch.Size([2, 512, 256])        |
| 1066    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.8.query_cat                           | input_1             | torch.float32 |         | -1.7287153        | 7.8619442        | 0.0523564      | 0.8405592             | torch.Size([2, 512, 256])        |
| 1066    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.8.query_cat                           | output              | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([2, 512, 512])        |
| 1067    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.8.key_cat                             | input_0             | torch.float32 |         | -5.3732157        | 7.4817314        | -0.0071356     | 0.7321773             | torch.Size([2, 512, 256])        |
| 1067    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.8.key_cat                             | input_1             | torch.float32 |         | -1.7287153        | 7.8619442        | 0.0523564      | 0.8405592             | torch.Size([2, 512, 256])        |
| 1067    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.8.key_cat                             | output              | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([2, 512, 512])        |
| 1068    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | input_0             | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([2, 512, 512])        |
| 1068    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | output              | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([512, 2, 512])        |
| 1069    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | input_0             | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([2, 512, 512])        |
| 1069    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | output              | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([512, 2, 512])        |
| 1070    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | input_0             | torch.float32 |         | -3.4635470        | 3.3912079        | 0.0048847      | 0.0464564             | torch.Size([2, 512, 512])        |
| 1070    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | output              | torch.float32 |         | -3.4635470        | 3.3912079        | 0.0048847      | 0.0464564             | torch.Size([512, 2, 512])        |
| 1071    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | input_0             | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([512, 2, 512])        |
| 1071    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | output              | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([512, 2, 512])        |
| 1072    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | input_0             | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([512, 2, 512])        |
| 1072    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | output              | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([512, 2, 512])        |
| 1073    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | input_0             | torch.float32 |         | -3.4635470        | 3.3912079        | 0.0048847      | 0.0464564             | torch.Size([512, 2, 512])        |
| 1073    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | output              | torch.float32 |         | -3.4635470        | 3.3912079        | 0.0048847      | 0.0464564             | torch.Size([512, 2, 512])        |
| 1074    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.q_proj                         | input               | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([512, 2, 512])        |
| 1074    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.q_proj                         | weight              | torch.float32 |         | -0.4437911        | 0.3668911        | -0.0000340     | 0.0026953             | torch.Size([512, 512])           |
| 1074    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.q_proj                         | bias                | torch.float32 |         | -0.1242760        | 0.1437089        | -0.0000070     | 0.0009090             | torch.Size([512])                |
| 1074    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.q_proj                         | output              | torch.float32 |         | -10.0747929       | 11.7903461       | 0.0230094      | 5.4723687             | torch.Size([512, 2, 512])        |
| 1075    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.k_proj                         | input               | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([512, 2, 512])        |
| 1075    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.k_proj                         | weight              | torch.float32 |         | -0.5519633        | 0.4679662        | -0.0001220     | 0.0030018             | torch.Size([512, 512])           |
| 1075    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.k_proj                         | bias                | torch.float32 |         | -0.1264462        | 0.1836499        | 0.0014424      | 0.0003835             | torch.Size([512])                |
| 1075    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.k_proj                         | output              | torch.float32 |         | -12.6947527       | 11.4442806       | -0.0257528     | 5.1424346             | torch.Size([512, 2, 512])        |
| 1076    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.v_proj                         | input               | torch.float32 |         | -3.4635470        | 3.3912079        | 0.0048847      | 0.0464564             | torch.Size([512, 2, 512])        |
| 1076    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.v_proj                         | weight              | torch.float32 |         | -0.3248511        | 0.2856031        | -0.0000271     | 0.0013692             | torch.Size([512, 512])           |
| 1076    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.v_proj                         | bias                | torch.float32 |         | -0.2827679        | 0.3053629        | -0.0033159     | 0.0075418             | torch.Size([512])                |
| 1076    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.v_proj                         | output              | torch.float32 |         | -2.2810888        | 2.4208658        | -0.0052279     | 0.1121179             | torch.Size([512, 2, 512])        |
| 1077    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | input_0             | torch.float32 |         | -10.0747929       | 11.7903461       | 0.0230094      | 5.4723687             | torch.Size([512, 2, 512])        |
| 1077    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | output              | torch.float32 |         | -10.0747929       | 11.7903461       | 0.0230094      | 5.4723687             | torch.Size([512, 16, 64])        |
| 1078    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | input_0             | torch.float32 |         | -10.0747929       | 11.7903461       | 0.0230094      | 5.4723687             | torch.Size([512, 16, 64])        |
| 1078    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | output              | torch.float32 |         | -10.0747929       | 11.7903461       | 0.0230094      | 5.4723687             | torch.Size([16, 512, 64])        |
| 1079    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | input_0             | torch.float32 |         | -12.6947527       | 11.4442806       | -0.0257528     | 5.1424346             | torch.Size([512, 2, 512])        |
| 1079    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | output              | torch.float32 |         | -12.6947527       | 11.4442806       | -0.0257528     | 5.1424346             | torch.Size([512, 16, 64])        |
| 1080    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | input_0             | torch.float32 |         | -12.6947527       | 11.4442806       | -0.0257528     | 5.1424346             | torch.Size([512, 16, 64])        |
| 1080    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | output              | torch.float32 |         | -12.6947527       | 11.4442806       | -0.0257528     | 5.1424346             | torch.Size([16, 512, 64])        |
| 1081    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | input_0             | torch.float32 |         | -2.2810888        | 2.4208658        | -0.0052279     | 0.1121179             | torch.Size([512, 2, 512])        |
| 1081    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | output              | torch.float32 |         | -2.2810888        | 2.4208658        | -0.0052279     | 0.1121179             | torch.Size([512, 16, 64])        |
| 1082    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | input_0             | torch.float32 |         | -2.2810888        | 2.4208658        | -0.0052279     | 0.1121179             | torch.Size([512, 16, 64])        |
| 1082    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | output              | torch.float32 |         | -2.2810888        | 2.4208658        | -0.0052279     | 0.1121179             | torch.Size([16, 512, 64])        |
| 1083    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.8.attn.q_scale_mul                    | input_0             | torch.float32 |         | -10.0747929       | 11.7903461       | 0.0230094      | 5.4723687             | torch.Size([16, 512, 64])        |
| 1083    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.8.attn.q_scale_mul                    | output              | torch.float32 |         | -1.2593491        | 1.4737933        | 0.0028762      | 0.0855058             | torch.Size([16, 512, 64])        |
| 1084    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | input_0             | torch.float32 |         | -12.6947527       | 11.4442806       | -0.0257528     | 5.1424346             | torch.Size([16, 512, 64])        |
| 1084    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | output              | torch.float32 |         | -12.6947527       | 11.4442806       | -0.0257528     | 5.1424346             | torch.Size([16, 64, 512])        |
| 1085    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.8.attn.matmul                         | input_0             | torch.float32 |         | -1.2593491        | 1.4737933        | 0.0028762      | 0.0855058             | torch.Size([16, 512, 64])        |
| 1085    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.8.attn.matmul                         | input_1             | torch.float32 |         | -12.6947527       | 11.4442806       | -0.0257528     | 5.1424346             | torch.Size([16, 64, 512])        |
| 1085    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.8.attn.matmul                         | output              | torch.float32 |         | -124.8553848      | 143.1747131      | -0.9176972     | 336.7198792           | torch.Size([16, 512, 512])       |
| 1086    | torch.Tensor.max                                                                  | head.layers.8.attn.softmax                        | input               | torch.float32 |         | -124.8553848      | 143.1747131      | -0.9176972     | 336.7198792           | torch.Size([16, 512, 512])       |
| 1086    | torch.Tensor.max                                                                  | head.layers.8.attn.softmax                        | output_0            | torch.float32 |         | 5.5127330         | 143.1747131      | 35.4785919     | 602.4503784           | torch.Size([16, 512, 1])         |
| 1086    | torch.Tensor.max                                                                  | head.layers.8.attn.softmax                        | output_1            | torch.int64   |         | 128.0000000       | 510.0000000      | 344.5427246    | 12806.9794922         | torch.Size([16, 512, 1])         |
| 1087    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.8.attn.softmax.sub                    | input_0             | torch.float32 |         | -124.8553848      | 143.1747131      | -0.9176972     | 336.7198792           | torch.Size([16, 512, 512])       |
| 1087    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.8.attn.softmax.sub                    | input_1             | torch.float32 |         | 5.5127330         | 143.1747131      | 35.4785919     | 602.4503784           | torch.Size([16, 512, 1])         |
| 1087    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.8.attn.softmax.sub                    | output              | torch.float32 |         | -257.5911560      | 0.0000000        | -36.3962936    | 1033.5111084          | torch.Size([16, 512, 512])       |
| 1088    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.8.attn.softmax.exp                    | input               | torch.float32 |         | -257.5911560      | 0.0000000        | -36.3962936    | 1033.5111084          | torch.Size([16, 512, 512])       |
| 1088    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.8.attn.softmax.exp                    | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0091509      | 0.0049266             | torch.Size([16, 512, 512])       |
| 1089    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.8.attn.softmax.sum                    | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0091509      | 0.0049266             | torch.Size([16, 512, 512])       |
| 1089    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.8.attn.softmax.sum                    | output              | torch.float32 |         | 1.0007937         | 31.9052773       | 4.6852551      | 13.3097258            | torch.Size([16, 512, 1])         |
| 1090    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.8.attn.softmax.reciprocal             | input               | torch.float32 |         | 1.0007937         | 31.9052773       | 4.6852551      | 13.3097258            | torch.Size([16, 512, 1])         |
| 1090    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.8.attn.softmax.reciprocal             | output              | torch.float32 |         | 0.0313428         | 0.9992070        | 0.3647972      | 0.0636163             | torch.Size([16, 512, 1])         |
| 1091    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.8.attn.softmax.mul                    | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0091509      | 0.0049266             | torch.Size([16, 512, 512])       |
| 1091    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.8.attn.softmax.mul                    | input_1             | torch.float32 |         | 0.0313428         | 0.9992070        | 0.3647972      | 0.0636163             | torch.Size([16, 512, 1])         |
| 1091    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.8.attn.softmax.mul                    | output              | torch.float32 |         | 0.0000000         | 0.9992070        | 0.0019531      | 0.0005018             | torch.Size([16, 512, 512])       |
| 1092    | torch.nn.modules.dropout.Dropout                                                  | head.layers.8.attn.attention_drop                 | input               | torch.float32 |         | 0.0000000         | 0.9992070        | 0.0019531      | 0.0005018             | torch.Size([16, 512, 512])       |
| 1092    | torch.nn.modules.dropout.Dropout                                                  | head.layers.8.attn.attention_drop                 | output              | torch.float32 |         | 0.0000000         | 0.9992070        | 0.0019531      | 0.0005018             | torch.Size([16, 512, 512])       |
| 1093    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.8.attn.attn_matmul                    | input_0             | torch.float32 |         | 0.0000000         | 0.9992070        | 0.0019531      | 0.0005018             | torch.Size([16, 512, 512])       |
| 1093    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.8.attn.attn_matmul                    | input_1             | torch.float32 |         | -2.2810888        | 2.4208658        | -0.0052279     | 0.1121179             | torch.Size([16, 512, 64])        |
| 1093    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.8.attn.attn_matmul                    | output              | torch.float32 |         | -1.8607081        | 1.9984264        | -0.0094888     | 0.0706403             | torch.Size([16, 512, 64])        |
| 1094    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | input_0             | torch.float32 |         | -1.8607081        | 1.9984264        | -0.0094888     | 0.0706403             | torch.Size([16, 512, 64])        |
| 1094    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | output              | torch.float32 |         | -1.8607081        | 1.9984264        | -0.0094888     | 0.0706403             | torch.Size([512, 16, 64])        |
| 1095    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | input_0             | torch.float32 |         | -1.8607081        | 1.9984264        | -0.0094888     | 0.0706403             | torch.Size([512, 16, 64])        |
| 1095    | torch.Tensor.reshape                                                              | head.layers.8.attn                                | output              | torch.float32 |         | -1.8607081        | 1.9984264        | -0.0094888     | 0.0706403             | torch.Size([512, 2, 512])        |
| 1096    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.out_proj                       | input               | torch.float32 |         | -1.8607081        | 1.9984264        | -0.0094888     | 0.0706403             | torch.Size([512, 2, 512])        |
| 1096    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.out_proj                       | weight              | torch.float32 |         | -0.2233234        | 0.2726021        | -0.0000586     | 0.0024737             | torch.Size([512, 512])           |
| 1096    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.out_proj                       | bias                | torch.float32 |         | -0.3740546        | 0.4565917        | -0.0073158     | 0.0213863             | torch.Size([512])                |
| 1096    | torch.nn.modules.linear.Linear                                                    | head.layers.8.attn.out_proj                       | output              | torch.float32 |         | -2.4883699        | 2.3822906        | 0.0213312      | 0.3575344             | torch.Size([512, 2, 512])        |
| 1097    | torch.Tensor.view                                                                 | head.layers.8.attn                                | input_0             | torch.float32 |         | 0.0000000         | 0.9992070        | 0.0019531      | 0.0005018             | torch.Size([16, 512, 512])       |
| 1097    | torch.Tensor.view                                                                 | head.layers.8.attn                                | output              | torch.float32 |         | 0.0000000         | 0.9992070        | 0.0019531      | 0.0005018             | torch.Size([2, 8, 512, 512])     |
| 1098    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.8.attn.attn_weights_mean              | input               | torch.float32 |         | 0.0000000         | 0.9992070        | 0.0019531      | 0.0005018             | torch.Size([2, 8, 512, 512])     |
| 1098    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.8.attn.attn_weights_mean              | output              | torch.float32 |         | 0.0000000         | 0.1629485        | 0.0019531      | 0.0000724             | torch.Size([2, 512, 512])        |
| 1099    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | input_0             | torch.float32 |         | -2.4883699        | 2.3822906        | 0.0213312      | 0.3575344             | torch.Size([512, 2, 512])        |
| 1099    | torch.Tensor.transpose                                                            | head.layers.8.attn                                | output              | torch.float32 |         | -2.4883699        | 2.3822906        | 0.0213312      | 0.3575344             | torch.Size([2, 512, 512])        |
| 1100    | torch.nn.modules.dropout.Dropout                                                  | head.layers.8.dropout                             | input               | torch.float32 |         | -2.4883699        | 2.3822906        | 0.0213312      | 0.3575344             | torch.Size([2, 512, 512])        |
| 1100    | torch.nn.modules.dropout.Dropout                                                  | head.layers.8.dropout                             | output              | torch.float32 |         | -2.4883699        | 2.3822906        | 0.0213312      | 0.3575344             | torch.Size([2, 512, 512])        |
| 1101    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.8.add                                 | input_0             | torch.float32 |         | -5.3732157        | 7.8619442        | 0.0226104      | 0.7872516             | torch.Size([2, 512, 512])        |
| 1101    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.8.add                                 | input_1             | torch.float32 |         | -2.4883699        | 2.3822906        | 0.0213312      | 0.3575344             | torch.Size([2, 512, 512])        |
| 1101    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.8.add                                 | output              | torch.float32 |         | -6.2537374        | 7.8523984        | 0.0439416      | 1.0412253             | torch.Size([2, 512, 512])        |
| 1102    | torch.nn.modules.linear.Linear                                                    | head.fc_after(3)                                  | input               | torch.float32 |         | -6.2537374        | 7.8523984        | 0.0439416      | 1.0412253             | torch.Size([2, 512, 512])        |
| 1102    | torch.nn.modules.linear.Linear                                                    | head.fc_after(3)                                  | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 1102    | torch.nn.modules.linear.Linear                                                    | head.fc_after(3)                                  | output              | torch.float32 |         | -50.4860001       | 38.5411072       | 0.0205872      | 18.1152439            | torch.Size([2, 512, 256])        |
| 1103    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.9.input_mean.mean                     | input_0             | torch.float32 |         | -50.4860001       | 38.5411072       | 0.0205872      | 18.1152439            | torch.Size([2, 512, 256])        |
| 1103    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.9.input_mean.mean                     | output              | torch.float32 |         | -0.0443122        | 0.0741557        | 0.0205872      | 0.0006852             | torch.Size([2, 512, 1])          |
| 1104    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.9.sub                                 | input_0             | torch.float32 |         | -50.4860001       | 38.5411072       | 0.0205872      | 18.1152439            | torch.Size([2, 512, 256])        |
| 1104    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.9.sub                                 | input_1             | torch.float32 |         | -0.0443122        | 0.0741557        | 0.0205872      | 0.0006852             | torch.Size([2, 512, 1])          |
| 1104    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.9.sub                                 | output              | torch.float32 |         | -50.5580940       | 38.5169106       | 0.0000000      | 18.1145592            | torch.Size([2, 512, 256])        |
| 1105    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.9.mul                                 | input_0             | torch.float32 |         | -50.5580940       | 38.5169106       | 0.0000000      | 18.1145592            | torch.Size([2, 512, 256])        |
| 1105    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.9.mul                                 | input_1             | torch.float32 |         | -50.5580940       | 38.5169106       | 0.0000000      | 18.1145592            | torch.Size([2, 512, 256])        |
| 1105    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.9.mul                                 | output              | torch.float32 |         | 0.0000000         | 2556.1208496     | 18.1144924     | 11928.6113281         | torch.Size([2, 512, 256])        |
| 1106    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.9.var_mean.mean                       | input_0             | torch.float32 |         | 0.0000000         | 2556.1208496     | 18.1144924     | 11928.6113281         | torch.Size([2, 512, 256])        |
| 1106    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.9.var_mean.mean                       | output              | torch.float32 |         | 6.8982115         | 39.1194839       | 18.1144924     | 45.7311363            | torch.Size([2, 512, 1])          |
| 1107    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.9.rsqrt                               | input               | torch.float32 |         | 6.8982115         | 39.1194839       | 18.1144924     | 45.7311363            | torch.Size([2, 512, 1])          |
| 1107    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.9.rsqrt                               | output              | torch.float32 |         | 0.1598834         | 0.3807426        | 0.2475946      | 0.0021169             | torch.Size([2, 512, 1])          |
| 1108    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.9.out_mul                             | input_0             | torch.float32 |         | -50.5580940       | 38.5169106       | 0.0000000      | 18.1145592            | torch.Size([2, 512, 256])        |
| 1108    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.9.out_mul                             | input_1             | torch.float32 |         | 0.1598834         | 0.3807426        | 0.2475946      | 0.0021169             | torch.Size([2, 512, 1])          |
| 1108    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.9.out_mul                             | output              | torch.float32 |         | -8.4250402        | 6.3777671        | 0.0000000      | 1.0000032             | torch.Size([2, 512, 256])        |
| 1109    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.9.weight_quant                        | input               | torch.float32 |         | 0.7484364         | 1.0673635        | 0.8810046      | 0.0025054             | torch.Size([256])                |
| 1109    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.9.weight_quant                        | output              | torch.float32 |         | 0.7484364         | 1.0673635        | 0.8810046      | 0.0025054             | torch.Size([256])                |
| 1110    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.9.weight_mul                          | input_0             | torch.float32 |         | -8.4250402        | 6.3777671        | 0.0000000      | 1.0000032             | torch.Size([2, 512, 256])        |
| 1110    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.9.weight_mul                          | input_1             | torch.float32 |         | 0.7484364         | 1.0673635        | 0.8810046      | 0.0025054             | torch.Size([256])                |
| 1110    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.9.weight_mul                          | output              | torch.float32 |         | -7.7149677        | 5.7010217        | -0.0002851     | 0.8012913             | torch.Size([2, 512, 256])        |
| 1111    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.9.bias_quant                          | input               | torch.float32 |         | -0.0912300        | 0.1098549        | -0.0018977     | 0.0007133             | torch.Size([256])                |
| 1111    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.9.bias_quant                          | output              | torch.float32 |         | -0.0912300        | 0.1098549        | -0.0018977     | 0.0007133             | torch.Size([256])                |
| 1112    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.9.bias_add                            | input_0             | torch.float32 |         | -7.7149677        | 5.7010217        | -0.0002851     | 0.8012913             | torch.Size([2, 512, 256])        |
| 1112    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.9.bias_add                            | input_1             | torch.float32 |         | -0.0912300        | 0.1098549        | -0.0018977     | 0.0007133             | torch.Size([256])                |
| 1112    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.9.bias_add                            | output              | torch.float32 |         | -7.6803398        | 5.6425967        | -0.0021828     | 0.7817355             | torch.Size([2, 512, 256])        |
| 1113    | torch.nn.modules.linear.Linear                                                    | head.layers.10.kps_generator.offset               | input               | torch.float32 |         | -7.6803398        | 5.6425967        | -0.0021828     | 0.7817355             | torch.Size([2, 512, 256])        |
| 1113    | torch.nn.modules.linear.Linear                                                    | head.layers.10.kps_generator.offset               | weight              | torch.float32 |         | -0.3201400        | 0.3177086        | 0.0014321      | 0.0068747             | torch.Size([24, 256])            |
| 1113    | torch.nn.modules.linear.Linear                                                    | head.layers.10.kps_generator.offset               | bias                | torch.float32 |         | -0.1534995        | 0.1723033        | 0.0028447      | 0.0042549             | torch.Size([24])                 |
| 1113    | torch.nn.modules.linear.Linear                                                    | head.layers.10.kps_generator.offset               | output              | torch.float32 |         | -14.2618208       | 16.6804752       | 0.2049131      | 17.0910740            | torch.Size([2, 512, 24])         |
| 1114    | torch.Tensor.view                                                                 | head.layers.10.kps_generator                      | input_0             | torch.float32 |         | -14.2618208       | 16.6804752       | 0.2049131      | 17.0910740            | torch.Size([2, 512, 24])         |
| 1114    | torch.Tensor.view                                                                 | head.layers.10.kps_generator                      | output              | torch.float32 |         | -14.2618208       | 16.6804752       | 0.2049131      | 17.0910740            | torch.Size([2, 512, 8, 3])       |
| 1115    | torch.Tensor.__getitem__                                                          | head.layers.10.kps_generator                      | input_0             | torch.float32 |         | -53.6162720       | 53.6826859       | 0.2125989      | 79.3594742            | torch.Size([2, 512, 11])         |
| 1115    | torch.Tensor.__getitem__                                                          | head.layers.10.kps_generator                      | output              | torch.float32 |         | -53.6162720       | 53.6826859       | 0.7275968      | 289.7366333           | torch.Size([2, 512, 1, 3])       |
| 1116    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.kps_generator.keypoints_add        | input_0             | torch.float32 |         | -14.2618208       | 16.6804752       | 0.2049131      | 17.0910740            | torch.Size([2, 512, 8, 3])       |
| 1116    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.kps_generator.keypoints_add        | input_1             | torch.float32 |         | -53.6162720       | 53.6826859       | 0.7275968      | 289.7366333           | torch.Size([2, 512, 1, 3])       |
| 1116    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.kps_generator.keypoints_add        | output              | torch.float32 |         | -63.8128052       | 66.9410400       | 0.9325099      | 305.8857117           | torch.Size([2, 512, 8, 3])       |
| 1117    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.weight_add                         | input_0             | torch.float32 |         | -7.6803398        | 5.6425967        | -0.0021828     | 0.7817355             | torch.Size([2, 512, 256])        |
| 1117    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.weight_add                         | input_1             | torch.float32 |         | -1.7287153        | 7.8619442        | 0.0523564      | 0.8405592             | torch.Size([2, 512, 256])        |
| 1117    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.weight_add                         | output              | torch.float32 |         | -8.0531569        | 8.3617945        | 0.0501736      | 1.5069473             | torch.Size([2, 512, 256])        |
| 1118    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 1118    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 1119    | torch.Tensor.reshape                                                              | head.layers.10                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 1119    | torch.Tensor.reshape                                                              | head.layers.10                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 1120    | torch.nn.modules.linear.Linear                                                    | head.layers.10.camera_encoder.0                   | input               | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 1120    | torch.nn.modules.linear.Linear                                                    | head.layers.10.camera_encoder.0                   | weight              | torch.float32 |         | -1.0164793        | 0.8352295        | 0.0021029      | 0.0230761             | torch.Size([256, 12])            |
| 1120    | torch.nn.modules.linear.Linear                                                    | head.layers.10.camera_encoder.0                   | bias                | torch.float32 |         | -0.3216627        | 0.3002117        | 0.0078120      | 0.0275127             | torch.Size([256])                |
| 1120    | torch.nn.modules.linear.Linear                                                    | head.layers.10.camera_encoder.0                   | output              | torch.float32 |         | -1.1068145        | 1.2416224        | 0.0096454      | 0.2003922             | torch.Size([2, 6, 256])          |
| 1121    | torch.nn.modules.activation.ReLU                                                  | head.layers.10.camera_encoder.1                   | input               | torch.float32 |         | 0.0000000         | 1.2416224        | 0.1933525      | 0.0690312             | torch.Size([2, 6, 256])          |
| 1121    | torch.nn.modules.activation.ReLU                                                  | head.layers.10.camera_encoder.1                   | output              | torch.float32 |         | 0.0000000         | 1.2416224        | 0.1933525      | 0.0690312             | torch.Size([2, 6, 256])          |
| 1122    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.2.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 1.2416224        | 0.1933525      | 0.0690312             | torch.Size([2, 6, 256])          |
| 1122    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.2.input_mean.mean   | output              | torch.float32 |         | 0.1363189         | 0.2159946        | 0.1933525      | 0.0007545             | torch.Size([2, 6, 1])            |
| 1123    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.2.sub               | input_0             | torch.float32 |         | 0.0000000         | 1.2416224        | 0.1933525      | 0.0690312             | torch.Size([2, 6, 256])          |
| 1123    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.2.sub               | input_1             | torch.float32 |         | 0.1363189         | 0.2159946        | 0.1933525      | 0.0007545             | torch.Size([2, 6, 1])            |
| 1123    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.2.sub               | output              | torch.float32 |         | -0.2159946        | 1.0334446        | 0.0000000      | 0.0683394             | torch.Size([2, 6, 256])          |
| 1124    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.mul               | input_0             | torch.float32 |         | -0.2159946        | 1.0334446        | 0.0000000      | 0.0683394             | torch.Size([2, 6, 256])          |
| 1124    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.mul               | input_1             | torch.float32 |         | -0.2159946        | 1.0334446        | 0.0000000      | 0.0683394             | torch.Size([2, 6, 256])          |
| 1124    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.mul               | output              | torch.float32 |         | 0.0000000         | 1.0680078        | 0.0683172      | 0.0137234             | torch.Size([2, 6, 256])          |
| 1125    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.2.var_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 1.0680078        | 0.0683172      | 0.0137234             | torch.Size([2, 6, 256])          |
| 1125    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.2.var_mean.mean     | output              | torch.float32 |         | 0.0257319         | 0.0882973        | 0.0683172      | 0.0004488             | torch.Size([2, 6, 1])            |
| 1126    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.10.camera_encoder.2.rsqrt             | input               | torch.float32 |         | 0.0257319         | 0.0882973        | 0.0683172      | 0.0004488             | torch.Size([2, 6, 1])            |
| 1126    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.10.camera_encoder.2.rsqrt             | output              | torch.float32 |         | 3.3651288         | 6.2327471        | 4.0499854      | 1.0156279             | torch.Size([2, 6, 1])            |
| 1127    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.out_mul           | input_0             | torch.float32 |         | -0.2159946        | 1.0334446        | 0.0000000      | 0.0683394             | torch.Size([2, 6, 256])          |
| 1127    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.out_mul           | input_1             | torch.float32 |         | 3.3651288         | 6.2327471        | 4.0499854      | 1.0156279             | torch.Size([2, 6, 1])            |
| 1127    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.out_mul           | output              | torch.float32 |         | -0.8496410        | 3.9160531        | 0.0000000      | 1.0001522             | torch.Size([2, 6, 256])          |
| 1128    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.camera_encoder.2.weight_quant      | input               | torch.float32 |         | 0.7735876         | 1.1663378        | 0.9820545      | 0.0040344             | torch.Size([256])                |
| 1128    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.camera_encoder.2.weight_quant      | output              | torch.float32 |         | 0.7735876         | 1.1663378        | 0.9820545      | 0.0040344             | torch.Size([256])                |
| 1129    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.weight_mul        | input_0             | torch.float32 |         | -0.8496410        | 3.9160531        | 0.0000000      | 1.0001522             | torch.Size([2, 6, 256])          |
| 1129    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.weight_mul        | input_1             | torch.float32 |         | 0.7735876         | 1.1663378        | 0.9820545      | 0.0040344             | torch.Size([256])                |
| 1129    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.2.weight_mul        | output              | torch.float32 |         | -0.9408408        | 3.8942459        | 0.0008105      | 0.9878499             | torch.Size([2, 6, 256])          |
| 1130    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.camera_encoder.2.bias_quant        | input               | torch.float32 |         | -0.0987514        | 0.1280675        | 0.0000397      | 0.0013846             | torch.Size([256])                |
| 1130    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.camera_encoder.2.bias_quant        | output              | torch.float32 |         | -0.0987514        | 0.1280675        | 0.0000397      | 0.0013846             | torch.Size([256])                |
| 1131    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.2.bias_add          | input_0             | torch.float32 |         | -0.9408408        | 3.8942459        | 0.0008105      | 0.9878499             | torch.Size([2, 6, 256])          |
| 1131    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.2.bias_add          | input_1             | torch.float32 |         | -0.0987514        | 0.1280675        | 0.0000397      | 0.0013846             | torch.Size([256])                |
| 1131    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.2.bias_add          | output              | torch.float32 |         | -1.0148071        | 3.9103243        | 0.0008502      | 0.9996728             | torch.Size([2, 6, 256])          |
| 1132    | torch.nn.modules.linear.Linear                                                    | head.layers.10.camera_encoder.3                   | input               | torch.float32 |         | -1.0148071        | 3.9103243        | 0.0008502      | 0.9996728             | torch.Size([2, 6, 256])          |
| 1132    | torch.nn.modules.linear.Linear                                                    | head.layers.10.camera_encoder.3                   | weight              | torch.float32 |         | -0.3692743        | 0.3998400        | -0.0000485     | 0.0051414             | torch.Size([256, 256])           |
| 1132    | torch.nn.modules.linear.Linear                                                    | head.layers.10.camera_encoder.3                   | bias                | torch.float32 |         | -0.0814586        | 0.2724895        | -0.0004629     | 0.0023738             | torch.Size([256])                |
| 1132    | torch.nn.modules.linear.Linear                                                    | head.layers.10.camera_encoder.3                   | output              | torch.float32 |         | -7.5072227        | 47.0550957       | 0.0117645      | 33.5881996            | torch.Size([2, 6, 256])          |
| 1133    | torch.nn.modules.activation.ReLU                                                  | head.layers.10.camera_encoder.4                   | input               | torch.float32 |         | 0.0000000         | 47.0550957       | 1.5273552      | 27.3509617            | torch.Size([2, 6, 256])          |
| 1133    | torch.nn.modules.activation.ReLU                                                  | head.layers.10.camera_encoder.4                   | output              | torch.float32 |         | 0.0000000         | 47.0550957       | 1.5273552      | 27.3509617            | torch.Size([2, 6, 256])          |
| 1134    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.5.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 47.0550957       | 1.5273552      | 27.3509617            | torch.Size([2, 6, 256])          |
| 1134    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.5.input_mean.mean   | output              | torch.float32 |         | 1.4022747         | 1.6958935        | 1.5273552      | 0.0104743             | torch.Size([2, 6, 1])            |
| 1135    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.5.sub               | input_0             | torch.float32 |         | 0.0000000         | 47.0550957       | 1.5273552      | 27.3509617            | torch.Size([2, 6, 256])          |
| 1135    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.5.sub               | input_1             | torch.float32 |         | 1.4022747         | 1.6958935        | 1.5273552      | 0.0104743             | torch.Size([2, 6, 1])            |
| 1135    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.10.camera_encoder.5.sub               | output              | torch.float32 |         | -1.6958935        | 45.6465187       | 0.0000000      | 27.3413563            | torch.Size([2, 6, 256])          |
| 1136    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.mul               | input_0             | torch.float32 |         | -1.6958935        | 45.6465187       | 0.0000000      | 27.3413563            | torch.Size([2, 6, 256])          |
| 1136    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.mul               | input_1             | torch.float32 |         | -1.6958935        | 45.6465187       | 0.0000000      | 27.3413563            | torch.Size([2, 6, 256])          |
| 1136    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.mul               | output              | torch.float32 |         | 0.0000768         | 2083.6047363     | 27.3324547     | 33594.5859375         | torch.Size([2, 6, 256])          |
| 1137    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.5.var_mean.mean     | input_0             | torch.float32 |         | 0.0000768         | 2083.6047363     | 27.3324547     | 33594.5859375         | torch.Size([2, 6, 256])          |
| 1137    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.10.camera_encoder.5.var_mean.mean     | output              | torch.float32 |         | 25.0144634        | 28.2545090       | 27.3324547     | 0.9307840             | torch.Size([2, 6, 1])            |
| 1138    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.10.camera_encoder.5.rsqrt             | input               | torch.float32 |         | 25.0144634        | 28.2545090       | 27.3324547     | 0.9307840             | torch.Size([2, 6, 1])            |
| 1138    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.10.camera_encoder.5.rsqrt             | output              | torch.float32 |         | 0.1881291         | 0.1999421        | 0.1913611      | 0.0000122             | torch.Size([2, 6, 1])            |
| 1139    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.out_mul           | input_0             | torch.float32 |         | -1.6958935        | 45.6465187       | 0.0000000      | 27.3413563            | torch.Size([2, 6, 256])          |
| 1139    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.out_mul           | input_1             | torch.float32 |         | 0.1881291         | 0.1999421        | 0.1913611      | 0.0000122             | torch.Size([2, 6, 1])            |
| 1139    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.out_mul           | output              | torch.float32 |         | -0.3202046        | 8.7128887        | 0.0000000      | 1.0003252             | torch.Size([2, 6, 256])          |
| 1140    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.camera_encoder.5.weight_quant      | input               | torch.float32 |         | 0.5887775         | 1.2592373        | 0.8845733      | 0.0137082             | torch.Size([256])                |
| 1140    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.camera_encoder.5.weight_quant      | output              | torch.float32 |         | 0.5887775         | 1.2592373        | 0.8845733      | 0.0137082             | torch.Size([256])                |
| 1141    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.weight_mul        | input_0             | torch.float32 |         | -0.3202046        | 8.7128887        | 0.0000000      | 1.0003252             | torch.Size([2, 6, 256])          |
| 1141    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.weight_mul        | input_1             | torch.float32 |         | 0.5887775         | 1.2592373        | 0.8845733      | 0.0137082             | torch.Size([256])                |
| 1141    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.camera_encoder.5.weight_mul        | output              | torch.float32 |         | -0.4032136        | 8.3520155        | -0.0184260     | 0.6760476             | torch.Size([2, 6, 256])          |
| 1142    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.camera_encoder.5.bias_quant        | input               | torch.float32 |         | -0.3856634        | 0.3310284        | 0.0403769      | 0.0131642             | torch.Size([256])                |
| 1142    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.camera_encoder.5.bias_quant        | output              | torch.float32 |         | -0.3856634        | 0.3310284        | 0.0403769      | 0.0131642             | torch.Size([256])                |
| 1143    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.5.bias_add          | input_0             | torch.float32 |         | -0.4032136        | 8.3520155        | -0.0184260     | 0.6760476             | torch.Size([2, 6, 256])          |
| 1143    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.5.bias_add          | input_1             | torch.float32 |         | -0.3856634        | 0.3310284        | 0.0403769      | 0.0131642             | torch.Size([256])                |
| 1143    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.camera_encoder.5.bias_add          | output              | torch.float32 |         | -0.7160327        | 8.3596153        | 0.0219509      | 0.6460814             | torch.Size([2, 6, 256])          |
| 1144    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | input_0             | torch.float32 |         | -8.0531569        | 8.3617945        | 0.0501736      | 1.5069473             | torch.Size([2, 512, 256])        |
| 1144    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | output              | torch.float32 |         | -8.0531569        | 8.3617945        | 0.0501736      | 1.5069473             | torch.Size([2, 512, 1, 256])     |
| 1145    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | input_0             | torch.float32 |         | -0.7160327        | 8.3596153        | 0.0219509      | 0.6460814             | torch.Size([2, 6, 256])          |
| 1145    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | output              | torch.float32 |         | -0.7160327        | 8.3596153        | 0.0219509      | 0.6460814             | torch.Size([2, 1, 6, 256])       |
| 1146    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.cam_add                            | input_0             | torch.float32 |         | -8.0531569        | 8.3617945        | 0.0501736      | 1.5069473             | torch.Size([2, 512, 1, 256])     |
| 1146    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.cam_add                            | input_1             | torch.float32 |         | -0.7160327        | 8.3596153        | 0.0219509      | 0.6460814             | torch.Size([2, 1, 6, 256])       |
| 1146    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.10.cam_add                            | output              | torch.float32 |         | -5.6417465        | 7.8269825        | 0.0721245      | 1.0922184             | torch.Size([2, 512, 6, 256])     |
| 1147    | torch.nn.modules.linear.Linear                                                    | head.layers.10.weights_fc                         | input               | torch.float32 |         | -5.6417465        | 7.8269825        | 0.0721245      | 1.0922184             | torch.Size([2, 512, 6, 256])     |
| 1147    | torch.nn.modules.linear.Linear                                                    | head.layers.10.weights_fc                         | weight              | torch.float32 |         | -0.3316146        | 0.2786153        | 0.0008751      | 0.0028934             | torch.Size([64, 256])            |
| 1147    | torch.nn.modules.linear.Linear                                                    | head.layers.10.weights_fc                         | bias                | torch.float32 |         | -0.0985109        | 0.1124940        | -0.0119324     | 0.0019689             | torch.Size([64])                 |
| 1147    | torch.nn.modules.linear.Linear                                                    | head.layers.10.weights_fc                         | output              | torch.float32 |         | -9.0777550        | 5.4471135        | -0.5447841     | 5.1134944             | torch.Size([2, 512, 6, 64])      |
| 1148    | torch.Tensor.reshape                                                              | head.layers.10                                    | input_0             | torch.float32 |         | -9.0777550        | 5.4471135        | -0.5447841     | 5.1134944             | torch.Size([2, 512, 6, 64])      |
| 1148    | torch.Tensor.reshape                                                              | head.layers.10                                    | output              | torch.float32 |         | -9.0777550        | 5.4471135        | -0.5447841     | 5.1134944             | torch.Size([2, 512, 48, 8])      |
| 1149    | torch.Tensor.max                                                                  | head.layers.10.weight_softmax                     | input               | torch.float32 |         | -9.0777550        | 5.4471135        | -0.5447841     | 5.1134944             | torch.Size([2, 512, 48, 8])      |
| 1149    | torch.Tensor.max                                                                  | head.layers.10.weight_softmax                     | output_0            | torch.float32 |         | 1.1401978         | 5.4471135        | 2.8656683      | 0.8118109             | torch.Size([2, 512, 1, 8])       |
| 1149    | torch.Tensor.max                                                                  | head.layers.10.weight_softmax                     | output_1            | torch.int64   |         | 6.0000000         | 47.0000000       | 29.2145996     | 145.3127594           | torch.Size([2, 512, 1, 8])       |
| 1150    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.10.weight_softmax.sub                 | input_0             | torch.float32 |         | -9.0777550        | 5.4471135        | -0.5447841     | 5.1134944             | torch.Size([2, 512, 48, 8])      |
| 1150    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.10.weight_softmax.sub                 | input_1             | torch.float32 |         | 1.1401978         | 5.4471135        | 2.8656683      | 0.8118109             | torch.Size([2, 512, 1, 8])       |
| 1150    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.10.weight_softmax.sub                 | output              | torch.float32 |         | -12.8213291       | 0.0000000        | -3.4104524     | 5.2173071             | torch.Size([2, 512, 48, 8])      |
| 1151    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.10.weight_softmax.exp                 | input               | torch.float32 |         | -12.8213291       | 0.0000000        | -3.4104524     | 5.2173071             | torch.Size([2, 512, 48, 8])      |
| 1151    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.10.weight_softmax.exp                 | output              | torch.float32 |         | 0.0000027         | 1.0000000        | 0.1843165      | 0.0804663             | torch.Size([2, 512, 48, 8])      |
| 1152    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.10.weight_softmax.sum                 | input               | torch.float32 |         | 0.0000027         | 1.0000000        | 0.1843165      | 0.0804663             | torch.Size([2, 512, 48, 8])      |
| 1152    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.10.weight_softmax.sum                 | output              | torch.float32 |         | 2.0451744         | 23.3063622       | 8.8471918      | 8.5889149             | torch.Size([2, 512, 1, 8])       |
| 1153    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.10.weight_softmax.reciprocal          | input               | torch.float32 |         | 2.0451744         | 23.3063622       | 8.8471918      | 8.5889149             | torch.Size([2, 512, 1, 8])       |
| 1153    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.10.weight_softmax.reciprocal          | output              | torch.float32 |         | 0.0429067         | 0.4889559        | 0.1302439      | 0.0038657             | torch.Size([2, 512, 1, 8])       |
| 1154    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.weight_softmax.mul                 | input_0             | torch.float32 |         | 0.0000027         | 1.0000000        | 0.1843165      | 0.0804663             | torch.Size([2, 512, 48, 8])      |
| 1154    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.weight_softmax.mul                 | input_1             | torch.float32 |         | 0.0429067         | 0.4889559        | 0.1302439      | 0.0038657             | torch.Size([2, 512, 1, 8])       |
| 1154    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.weight_softmax.mul                 | output              | torch.float32 |         | 0.0000005         | 0.4889559        | 0.0208333      | 0.0011968             | torch.Size([2, 512, 48, 8])      |
| 1155    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | input_0             | torch.float32 |         | -63.8128052       | 66.9410400       | 0.9325099      | 305.8857117           | torch.Size([2, 512, 8, 3])       |
| 1155    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | output              | torch.float32 |         | -63.8128052       | 60.2788124       | 0.8721940      | 343.9483337           | torch.Size([2, 512, 8, 1])       |
| 1156    | torch.ones_like                                                                   | head.layers.10                                    | input               | torch.float32 |         | -63.8128052       | 60.2788124       | 0.8721940      | 343.9483337           | torch.Size([2, 512, 8, 1])       |
| 1156    | torch.ones_like                                                                   | head.layers.10                                    | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1157    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.point_quant_stub                   | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1157    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.10.point_quant_stub                   | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1158    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.10.point_cat                          | input_0             | torch.float32 |         | -63.8128052       | 66.9410400       | 0.9325099      | 305.8857117           | torch.Size([2, 512, 8, 3])       |
| 1158    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.10.point_cat                          | input_1             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1158    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.10.point_cat                          | output              | torch.float32 |         | -63.8128052       | 66.9410400       | 0.9493825      | 229.4128113           | torch.Size([2, 512, 8, 4])       |
| 1159    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 1159    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1160    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | input_0             | torch.float32 |         | -63.8128052       | 66.9410400       | 0.9493825      | 229.4128113           | torch.Size([2, 512, 8, 4])       |
| 1160    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | output              | torch.float32 |         | -63.8128052       | 66.9410400       | 0.9493825      | 229.4128113           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1161    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.point_matmul                       | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1161    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.point_matmul                       | input_1             | torch.float32 |         | -63.8128052       | 66.9410400       | 0.9493825      | 229.4128113           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1161    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.point_matmul                       | output              | torch.float32 |         | -101.0093918      | 99.9086609       | 0.0732355      | 103.1971436           | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1162    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.10.point_sum                          | input               | torch.float32 |         | -101.0093918      | 99.9086609       | 0.0732355      | 103.1971436           | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1162    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.10.point_sum                          | output              | torch.float32 |         | -103.9291840      | 104.2897797      | 0.2929421      | 408.1387024           | torch.Size([2, 6, 512, 8, 4])    |
| 1163    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | input_0             | torch.float32 |         | -103.9291840      | 104.2897797      | 0.2929421      | 408.1387024           | torch.Size([2, 6, 512, 8, 4])    |
| 1163    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | output              | torch.float32 |         | -67.8526154       | 66.2347488       | -0.5126898     | 454.1628113           | torch.Size([2, 6, 512, 8, 1])    |
| 1164    | torch.clamp                                                                       | head.layers.10                                    | input               | torch.float32 |         | -67.8526154       | 66.2347488       | -0.5126898     | 454.1628113           | torch.Size([2, 6, 512, 8, 1])    |
| 1164    | torch.clamp                                                                       | head.layers.10                                    | output              | torch.float32 |         | 0.0000100         | 66.2347488       | 7.6946025      | 160.7083130           | torch.Size([2, 6, 512, 8, 1])    |
| 1165    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.10.reciprocal_op                      | input               | torch.float32 |         | 0.0000100         | 66.2347488       | 7.6946025      | 160.7083130           | torch.Size([2, 6, 512, 8, 1])    |
| 1165    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.10.reciprocal_op                      | output              | torch.float32 |         | 0.0150978         | 100000.0000000   | 55761.9687500  | 2466825216.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 1166    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | input_0             | torch.float32 |         | -103.9291840      | 104.2897797      | 0.2929421      | 408.1387024           | torch.Size([2, 6, 512, 8, 4])    |
| 1166    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | output              | torch.float32 |         | -103.9291840      | 104.2897797      | 0.3422291      | 588.6255493           | torch.Size([2, 6, 512, 8, 2])    |
| 1167    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.point_mul                          | input_0             | torch.float32 |         | -103.9291840      | 104.2897797      | 0.3422291      | 588.6255493           | torch.Size([2, 6, 512, 8, 2])    |
| 1167    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.point_mul                          | input_1             | torch.float32 |         | 0.0150978         | 100000.0000000   | 55761.9687500  | 2466825216.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 1167    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.point_mul                          | output              | torch.float32 |         | -10165063.0000000 | 10428978.0000000 | 178867.5468750 | 2915732553728.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 1168    | torch.Tensor.flatten                                                              | head.layers.10                                    | input               | torch.float32 |         | -10165063.0000000 | 10428978.0000000 | 178867.5468750 | 2915732553728.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 1168    | torch.Tensor.flatten                                                              | head.layers.10                                    | output              | torch.float32 |         | -10165063.0000000 | 10428978.0000000 | 178867.5468750 | 2915732553728.0000000 | torch.Size([12, 512, 8, 2])      |
| 1169    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.10                                    | input_0             | torch.float32 |         | -44.8620338       | 31.9191360       | 0.1436918      | 20.2713203            | torch.Size([12, 256, 16, 44])    |
| 1169    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.10                                    | input_1             | torch.float32 |         | -10165063.0000000 | 10428978.0000000 | 178867.5468750 | 2915732553728.0000000 | torch.Size([12, 512, 8, 2])      |
| 1169    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.10                                    | output              | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811217             | torch.Size([12, 256, 512, 8])    |
| 1170    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.10.feat_cat                           | input               | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811217             | torch.Size([12, 256, 512, 8])    |
| 1170    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.10.feat_cat                           | output              | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811217             | torch.Size([12, 256, 512, 8])    |
| 1171    | torch.Tensor.view                                                                 | head.layers.10                                    | input_0             | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811217             | torch.Size([12, 256, 512, 8])    |
| 1171    | torch.Tensor.view                                                                 | head.layers.10                                    | output              | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811217             | torch.Size([2, 6, 256, 512, 8])  |
| 1172    | torch.Tensor.permute                                                              | head.layers.10                                    | input_0             | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811217             | torch.Size([2, 6, 256, 512, 8])  |
| 1172    | torch.Tensor.permute                                                              | head.layers.10                                    | output              | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811217             | torch.Size([2, 512, 6, 8, 256])  |
| 1173    | torch.Tensor.contiguous                                                           | head.layers.10                                    | input               | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811217             | torch.Size([2, 512, 6, 8, 256])  |
| 1173    | torch.Tensor.contiguous                                                           | head.layers.10                                    | output              | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811222             | torch.Size([2, 512, 6, 8, 256])  |
| 1174    | torch.Tensor.view                                                                 | head.layers.10                                    | input_0             | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811222             | torch.Size([2, 512, 6, 8, 256])  |
| 1174    | torch.Tensor.view                                                                 | head.layers.10                                    | output              | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811222             | torch.Size([2, 512, 48, 256])    |
| 1175    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | input_0             | torch.float32 |         | 0.0000005         | 0.4889559        | 0.0208333      | 0.0011968             | torch.Size([2, 512, 48, 8])      |
| 1175    | torch.Tensor.__getitem__                                                          | head.layers.10                                    | output              | torch.float32 |         | 0.0000005         | 0.4889559        | 0.0208333      | 0.0011968             | torch.Size([2, 512, 48, 8, 1])   |
| 1176    | torch.Tensor.reshape                                                              | head.layers.10                                    | input_0             | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811222             | torch.Size([2, 512, 48, 256])    |
| 1176    | torch.Tensor.reshape                                                              | head.layers.10                                    | output              | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811222             | torch.Size([2, 512, 48, 8, 32])  |
| 1177    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.feat_mul                           | input_0             | torch.float32 |         | 0.0000005         | 0.4889559        | 0.0208333      | 0.0011968             | torch.Size([2, 512, 48, 8, 1])   |
| 1177    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.feat_mul                           | input_1             | torch.float32 |         | -43.3496399       | 28.1113815       | 0.0197854      | 2.7811222             | torch.Size([2, 512, 48, 8, 32])  |
| 1177    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.10.feat_mul                           | output              | torch.float32 |         | -4.1893563        | 4.2916169        | 0.0002528      | 0.0038493             | torch.Size([2, 512, 48, 8, 32])  |
| 1178    | torch.Tensor.view                                                                 | head.layers.10                                    | input_0             | torch.float32 |         | -4.1893563        | 4.2916169        | 0.0002528      | 0.0038493             | torch.Size([2, 512, 48, 8, 32])  |
| 1178    | torch.Tensor.view                                                                 | head.layers.10                                    | output              | torch.float32 |         | -4.1893563        | 4.2916169        | 0.0002528      | 0.0038493             | torch.Size([2, 512, 48, 256])    |
| 1179    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.10.feat_sum                           | input               | torch.float32 |         | -4.1893563        | 4.2916169        | 0.0002528      | 0.0038493             | torch.Size([2, 512, 48, 256])    |
| 1179    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.10.feat_sum                           | output              | torch.float32 |         | -5.3740215        | 5.2039099        | 0.0121367      | 0.3217037             | torch.Size([2, 512, 256])        |
| 1180    | torch.nn.modules.linear.Linear                                                    | head.layers.10.output_proj                        | input               | torch.float32 |         | -5.3740215        | 5.2039099        | 0.0121367      | 0.3217037             | torch.Size([2, 512, 256])        |
| 1180    | torch.nn.modules.linear.Linear                                                    | head.layers.10.output_proj                        | weight              | torch.float32 |         | -0.2663807        | 0.2879749        | 0.0001328      | 0.0059484             | torch.Size([256, 256])           |
| 1180    | torch.nn.modules.linear.Linear                                                    | head.layers.10.output_proj                        | bias                | torch.float32 |         | -0.0821608        | 0.1140266        | 0.0010564      | 0.0009855             | torch.Size([256])                |
| 1180    | torch.nn.modules.linear.Linear                                                    | head.layers.10.output_proj                        | output              | torch.float32 |         | -5.7058530        | 6.4481091        | 0.0304368      | 0.7316073             | torch.Size([2, 512, 256])        |
| 1181    | torch.nn.modules.dropout.Dropout                                                  | head.layers.10.proj_drop                          | input               | torch.float32 |         | -5.7058530        | 6.4481091        | 0.0304368      | 0.7316073             | torch.Size([2, 512, 256])        |
| 1181    | torch.nn.modules.dropout.Dropout                                                  | head.layers.10.proj_drop                          | output              | torch.float32 |         | -5.7058530        | 6.4481091        | 0.0304368      | 0.7316073             | torch.Size([2, 512, 256])        |
| 1182    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.10.residual_op                        | input_0             | torch.float32 |         | -5.7058530        | 6.4481091        | 0.0304368      | 0.7316073             | torch.Size([2, 512, 256])        |
| 1182    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.10.residual_op                        | input_1             | torch.float32 |         | -7.6803398        | 5.6425967        | -0.0021828     | 0.7817355             | torch.Size([2, 512, 256])        |
| 1182    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.10.residual_op                        | output              | torch.float32 |         | -7.6803398        | 6.4481091        | 0.0141270      | 0.7569360             | torch.Size([2, 512, 512])        |
| 1183    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.11.pre_norm.input_mean.mean           | input_0             | torch.float32 |         | -7.6803398        | 6.4481091        | 0.0141270      | 0.7569360             | torch.Size([2, 512, 512])        |
| 1183    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.11.pre_norm.input_mean.mean           | output              | torch.float32 |         | -0.0373251        | 0.1002352        | 0.0141270      | 0.0002031             | torch.Size([2, 512, 1])          |
| 1184    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.11.pre_norm.sub                       | input_0             | torch.float32 |         | -7.6803398        | 6.4481091        | 0.0141270      | 0.7569360             | torch.Size([2, 512, 512])        |
| 1184    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.11.pre_norm.sub                       | input_1             | torch.float32 |         | -0.0373251        | 0.1002352        | 0.0141270      | 0.0002031             | torch.Size([2, 512, 1])          |
| 1184    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.11.pre_norm.sub                       | output              | torch.float32 |         | -7.7209644        | 6.3889122        | -0.0000000     | 0.7567331             | torch.Size([2, 512, 512])        |
| 1185    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.mul                       | input_0             | torch.float32 |         | -7.7209644        | 6.3889122        | -0.0000000     | 0.7567331             | torch.Size([2, 512, 512])        |
| 1185    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.mul                       | input_1             | torch.float32 |         | -7.7209644        | 6.3889122        | -0.0000000     | 0.7567331             | torch.Size([2, 512, 512])        |
| 1185    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.mul                       | output              | torch.float32 |         | 0.0000000         | 59.6132927       | 0.7567316      | 9.8052111             | torch.Size([2, 512, 512])        |
| 1186    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.11.pre_norm.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 59.6132927       | 0.7567316      | 9.8052111             | torch.Size([2, 512, 512])        |
| 1186    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.11.pre_norm.var_mean.mean             | output              | torch.float32 |         | 0.4424441         | 2.2526457        | 0.7567316      | 0.0531651             | torch.Size([2, 512, 1])          |
| 1187    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.11.pre_norm.rsqrt                     | input               | torch.float32 |         | 0.4424441         | 2.2526457        | 0.7567316      | 0.0531651             | torch.Size([2, 512, 1])          |
| 1187    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.11.pre_norm.rsqrt                     | output              | torch.float32 |         | 0.6662736         | 1.5033699        | 1.1825718      | 0.0236699             | torch.Size([2, 512, 1])          |
| 1188    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.out_mul                   | input_0             | torch.float32 |         | -7.7209644        | 6.3889122        | -0.0000000     | 0.7567331             | torch.Size([2, 512, 512])        |
| 1188    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.out_mul                   | input_1             | torch.float32 |         | 0.6662736         | 1.5033699        | 1.1825718      | 0.0236699             | torch.Size([2, 512, 1])          |
| 1188    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.out_mul                   | output              | torch.float32 |         | -10.3572969       | 7.9748559        | 0.0000000      | 0.9999876             | torch.Size([2, 512, 512])        |
| 1189    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.11.pre_norm.weight_quant              | input               | torch.float32 |         | 0.7318589         | 1.5822344        | 1.0533838      | 0.0550146             | torch.Size([512])                |
| 1189    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.11.pre_norm.weight_quant              | output              | torch.float32 |         | 0.7318589         | 1.5822344        | 1.0533838      | 0.0550146             | torch.Size([512])                |
| 1190    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.weight_mul                | input_0             | torch.float32 |         | -10.3572969       | 7.9748559        | 0.0000000      | 0.9999876             | torch.Size([2, 512, 512])        |
| 1190    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.weight_mul                | input_1             | torch.float32 |         | 0.7318589         | 1.5822344        | 1.0533838      | 0.0550146             | torch.Size([512])                |
| 1190    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.11.pre_norm.weight_mul                | output              | torch.float32 |         | -8.2227306        | 5.8364692        | 0.0016907      | 0.7484447             | torch.Size([2, 512, 512])        |
| 1191    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.11.pre_norm.bias_quant                | input               | torch.float32 |         | -0.1939566        | 0.1783928        | -0.0027595     | 0.0020715             | torch.Size([512])                |
| 1191    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.11.pre_norm.bias_quant                | output              | torch.float32 |         | -0.1939566        | 0.1783928        | -0.0027595     | 0.0020715             | torch.Size([512])                |
| 1192    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.11.pre_norm.bias_add                  | input_0             | torch.float32 |         | -8.2227306        | 5.8364692        | 0.0016907      | 0.7484447             | torch.Size([2, 512, 512])        |
| 1192    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.11.pre_norm.bias_add                  | input_1             | torch.float32 |         | -0.1939566        | 0.1783928        | -0.0027595     | 0.0020715             | torch.Size([512])                |
| 1192    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.11.pre_norm.bias_add                  | output              | torch.float32 |         | -8.0443382        | 5.6458049        | -0.0010688     | 0.7248935             | torch.Size([2, 512, 512])        |
| 1193    | torch.nn.modules.linear.Linear                                                    | head.layers.11.layers.0.0                         | input               | torch.float32 |         | -8.0443382        | 5.6458049        | -0.0010688     | 0.7248935             | torch.Size([2, 512, 512])        |
| 1193    | torch.nn.modules.linear.Linear                                                    | head.layers.11.layers.0.0                         | weight              | torch.float32 |         | -0.5279155        | 0.4437539        | -0.0006416     | 0.0056500             | torch.Size([1024, 512])          |
| 1193    | torch.nn.modules.linear.Linear                                                    | head.layers.11.layers.0.0                         | bias                | torch.float32 |         | -0.1276487        | 0.0716278        | -0.0487325     | 0.0010013             | torch.Size([1024])               |
| 1193    | torch.nn.modules.linear.Linear                                                    | head.layers.11.layers.0.0                         | output              | torch.float32 |         | -18.1393929       | 10.5071039       | -3.0093937     | 6.7754679             | torch.Size([2, 512, 1024])       |
| 1194    | torch.nn.modules.activation.ReLU                                                  | head.layers.11.activate                           | input               | torch.float32 |         | 0.0000000         | 10.5071039       | 0.1781600      | 0.4738370             | torch.Size([2, 512, 1024])       |
| 1194    | torch.nn.modules.activation.ReLU                                                  | head.layers.11.activate                           | output              | torch.float32 |         | 0.0000000         | 10.5071039       | 0.1781600      | 0.4738370             | torch.Size([2, 512, 1024])       |
| 1195    | torch.nn.modules.dropout.Dropout                                                  | head.layers.11.layers.0.2                         | input               | torch.float32 |         | 0.0000000         | 10.5071039       | 0.1781600      | 0.4738370             | torch.Size([2, 512, 1024])       |
| 1195    | torch.nn.modules.dropout.Dropout                                                  | head.layers.11.layers.0.2                         | output              | torch.float32 |         | 0.0000000         | 10.5071039       | 0.1781600      | 0.4738370             | torch.Size([2, 512, 1024])       |
| 1196    | torch.nn.modules.linear.Linear                                                    | head.layers.11.layers.1                           | input               | torch.float32 |         | 0.0000000         | 10.5071039       | 0.1781600      | 0.4738370             | torch.Size([2, 512, 1024])       |
| 1196    | torch.nn.modules.linear.Linear                                                    | head.layers.11.layers.1                           | weight              | torch.float32 |         | -0.5053306        | 0.4998906        | 0.0001121      | 0.0056677             | torch.Size([256, 1024])          |
| 1196    | torch.nn.modules.linear.Linear                                                    | head.layers.11.layers.1                           | bias                | torch.float32 |         | -0.0872618        | 0.0770759        | -0.0007722     | 0.0009508             | torch.Size([256])                |
| 1196    | torch.nn.modules.linear.Linear                                                    | head.layers.11.layers.1                           | output              | torch.float32 |         | -18.0220470       | 15.4895277       | 0.0291723      | 9.7022715             | torch.Size([2, 512, 256])        |
| 1197    | torch.nn.modules.dropout.Dropout                                                  | head.layers.11.layers.2                           | input               | torch.float32 |         | -18.0220470       | 15.4895277       | 0.0291723      | 9.7022715             | torch.Size([2, 512, 256])        |
| 1197    | torch.nn.modules.dropout.Dropout                                                  | head.layers.11.layers.2                           | output              | torch.float32 |         | -18.0220470       | 15.4895277       | 0.0291723      | 9.7022715             | torch.Size([2, 512, 256])        |
| 1198    | torch.nn.modules.linear.Linear                                                    | head.layers.11.identity_fc                        | input               | torch.float32 |         | -8.0443382        | 5.6458049        | -0.0010688     | 0.7248935             | torch.Size([2, 512, 512])        |
| 1198    | torch.nn.modules.linear.Linear                                                    | head.layers.11.identity_fc                        | weight              | torch.float32 |         | -0.4656178        | 0.4816367        | -0.0002583     | 0.0071310             | torch.Size([256, 512])           |
| 1198    | torch.nn.modules.linear.Linear                                                    | head.layers.11.identity_fc                        | bias                | torch.float32 |         | -0.1430661        | 0.0827197        | -0.0009835     | 0.0011322             | torch.Size([256])                |
| 1198    | torch.nn.modules.linear.Linear                                                    | head.layers.11.identity_fc                        | output              | torch.float32 |         | -18.8483143       | 10.4704685       | -0.0074644     | 8.4334784             | torch.Size([2, 512, 256])        |
| 1199    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.11.short_add                          | input_0             | torch.float32 |         | -18.8483143       | 10.4704685       | -0.0074644     | 8.4334784             | torch.Size([2, 512, 256])        |
| 1199    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.11.short_add                          | input_1             | torch.float32 |         | -18.0220470       | 15.4895277       | 0.0291723      | 9.7022715             | torch.Size([2, 512, 256])        |
| 1199    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.11.short_add                          | output              | torch.float32 |         | -24.6874771       | 19.5555077       | 0.0217079      | 21.5781784            | torch.Size([2, 512, 256])        |
| 1200    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.12.input_mean.mean                    | input_0             | torch.float32 |         | -24.6874771       | 19.5555077       | 0.0217079      | 21.5781784            | torch.Size([2, 512, 256])        |
| 1200    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.12.input_mean.mean                    | output              | torch.float32 |         | -0.2116720        | 0.2231257        | 0.0217079      | 0.0146832             | torch.Size([2, 512, 1])          |
| 1201    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.12.sub                                | input_0             | torch.float32 |         | -24.6874771       | 19.5555077       | 0.0217079      | 21.5781784            | torch.Size([2, 512, 256])        |
| 1201    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.12.sub                                | input_1             | torch.float32 |         | -0.2116720        | 0.2231257        | 0.0217079      | 0.0146832             | torch.Size([2, 512, 1])          |
| 1201    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.12.sub                                | output              | torch.float32 |         | -24.9106026       | 19.3323822       | 0.0000000      | 21.5635090            | torch.Size([2, 512, 256])        |
| 1202    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.12.mul                                | input_0             | torch.float32 |         | -24.9106026       | 19.3323822       | 0.0000000      | 21.5635090            | torch.Size([2, 512, 256])        |
| 1202    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.12.mul                                | input_1             | torch.float32 |         | -24.9106026       | 19.3323822       | 0.0000000      | 21.5635090            | torch.Size([2, 512, 256])        |
| 1202    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.12.mul                                | output              | torch.float32 |         | 0.0000000         | 620.5381470      | 21.5634270     | 1875.2052002          | torch.Size([2, 512, 256])        |
| 1203    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.12.var_mean.mean                      | input_0             | torch.float32 |         | 0.0000000         | 620.5381470      | 21.5634270     | 1875.2052002          | torch.Size([2, 512, 256])        |
| 1203    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.12.var_mean.mean                      | output              | torch.float32 |         | 5.6709552         | 49.9105377       | 21.5634270     | 276.1220703           | torch.Size([2, 512, 1])          |
| 1204    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.12.rsqrt                              | input               | torch.float32 |         | 5.6709552         | 49.9105377       | 21.5634270     | 276.1220703           | torch.Size([2, 512, 1])          |
| 1204    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.12.rsqrt                              | output              | torch.float32 |         | 0.1415480         | 0.4199248        | 0.2576637      | 0.0058427             | torch.Size([2, 512, 1])          |
| 1205    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.12.out_mul                            | input_0             | torch.float32 |         | -24.9106026       | 19.3323822       | 0.0000000      | 21.5635090            | torch.Size([2, 512, 256])        |
| 1205    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.12.out_mul                            | input_1             | torch.float32 |         | 0.1415480         | 0.4199248        | 0.2576637      | 0.0058427             | torch.Size([2, 512, 1])          |
| 1205    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.12.out_mul                            | output              | torch.float32 |         | -5.9552670        | 4.3795395        | 0.0000000      | 1.0000031             | torch.Size([2, 512, 256])        |
| 1206    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.12.weight_quant                       | input               | torch.float32 |         | 0.6993152         | 1.0544560        | 0.9030904      | 0.0036567             | torch.Size([256])                |
| 1206    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.12.weight_quant                       | output              | torch.float32 |         | 0.6993152         | 1.0544560        | 0.9030904      | 0.0036567             | torch.Size([256])                |
| 1207    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.12.weight_mul                         | input_0             | torch.float32 |         | -5.9552670        | 4.3795395        | 0.0000000      | 1.0000031             | torch.Size([2, 512, 256])        |
| 1207    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.12.weight_mul                         | input_1             | torch.float32 |         | 0.6993152         | 1.0544560        | 0.9030904      | 0.0036567             | torch.Size([256])                |
| 1207    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.12.weight_mul                         | output              | torch.float32 |         | -5.7294078        | 3.6411653        | -0.0002691     | 0.8264444             | torch.Size([2, 512, 256])        |
| 1208    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.12.bias_quant                         | input               | torch.float32 |         | -0.1003586        | 0.1476445        | 0.0017286      | 0.0014609             | torch.Size([256])                |
| 1208    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.12.bias_quant                         | output              | torch.float32 |         | -0.1003586        | 0.1476445        | 0.0017286      | 0.0014609             | torch.Size([256])                |
| 1209    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.12.bias_add                           | input_0             | torch.float32 |         | -5.7294078        | 3.6411653        | -0.0002691     | 0.8264444             | torch.Size([2, 512, 256])        |
| 1209    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.12.bias_add                           | input_1             | torch.float32 |         | -0.1003586        | 0.1476445        | 0.0017286      | 0.0014609             | torch.Size([256])                |
| 1209    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.12.bias_add                           | output              | torch.float32 |         | -5.6462903        | 3.6429195        | 0.0014595      | 0.8091727             | torch.Size([2, 512, 256])        |
| 1210    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.add1                               | input_0             | torch.float32 |         | -5.6462903        | 3.6429195        | 0.0014595      | 0.8091727             | torch.Size([2, 512, 256])        |
| 1210    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.add1                               | input_1             | torch.float32 |         | -1.7287153        | 7.8619442        | 0.0523564      | 0.8405592             | torch.Size([2, 512, 256])        |
| 1210    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.add1                               | output              | torch.float32 |         | -4.0018969        | 8.5633402        | 0.0538159      | 1.2816854             | torch.Size([2, 512, 256])        |
| 1211    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.0                           | input               | torch.float32 |         | -4.0018969        | 8.5633402        | 0.0538159      | 1.2816854             | torch.Size([2, 512, 256])        |
| 1211    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.0                           | weight              | torch.float32 |         | -0.6005406        | 0.4653489        | -0.0001235     | 0.0049280             | torch.Size([256, 256])           |
| 1211    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.0                           | bias                | torch.float32 |         | -0.2076813        | 0.0865848        | -0.0322298     | 0.0026380             | torch.Size([256])                |
| 1211    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.0                           | output              | torch.float32 |         | -10.8254414       | 10.5278549       | -0.7171094     | 5.2157598             | torch.Size([2, 512, 256])        |
| 1212    | torch.nn.modules.activation.ReLU                                                  | head.layers.13.layers.1                           | input               | torch.float32 |         | 0.0000000         | 10.5278549       | 0.5839310      | 1.1941520             | torch.Size([2, 512, 256])        |
| 1212    | torch.nn.modules.activation.ReLU                                                  | head.layers.13.layers.1                           | output              | torch.float32 |         | 0.0000000         | 10.5278549       | 0.5839310      | 1.1941520             | torch.Size([2, 512, 256])        |
| 1213    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.2                           | input               | torch.float32 |         | 0.0000000         | 10.5278549       | 0.5839310      | 1.1941520             | torch.Size([2, 512, 256])        |
| 1213    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.2                           | weight              | torch.float32 |         | -0.6167275        | 0.5256047        | -0.0056006     | 0.0049711             | torch.Size([256, 256])           |
| 1213    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.2                           | bias                | torch.float32 |         | -0.1263612        | 0.1803766        | -0.0060339     | 0.0029060             | torch.Size([256])                |
| 1213    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.2                           | output              | torch.float32 |         | -12.8107128       | 7.5948048        | -0.9112755     | 6.4527545             | torch.Size([2, 512, 256])        |
| 1214    | torch.nn.modules.activation.ReLU                                                  | head.layers.13.layers.3                           | input               | torch.float32 |         | 0.0000000         | 7.5948048        | 0.5772700      | 1.1945308             | torch.Size([2, 512, 256])        |
| 1214    | torch.nn.modules.activation.ReLU                                                  | head.layers.13.layers.3                           | output              | torch.float32 |         | 0.0000000         | 7.5948048        | 0.5772700      | 1.1945308             | torch.Size([2, 512, 256])        |
| 1215    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.13.layers.4.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 7.5948048        | 0.5772700      | 1.1945308             | torch.Size([2, 512, 256])        |
| 1215    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.13.layers.4.input_mean.mean           | output              | torch.float32 |         | 0.2651166         | 0.8585741        | 0.5772700      | 0.0174218             | torch.Size([2, 512, 1])          |
| 1216    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.13.layers.4.sub                       | input_0             | torch.float32 |         | 0.0000000         | 7.5948048        | 0.5772700      | 1.1945308             | torch.Size([2, 512, 256])        |
| 1216    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.13.layers.4.sub                       | input_1             | torch.float32 |         | 0.2651166         | 0.8585741        | 0.5772700      | 0.0174218             | torch.Size([2, 512, 1])          |
| 1216    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.13.layers.4.sub                       | output              | torch.float32 |         | -0.8585741        | 6.9844007        | -0.0000000     | 1.1771259             | torch.Size([2, 512, 256])        |
| 1217    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.mul                       | input_0             | torch.float32 |         | -0.8585741        | 6.9844007        | -0.0000000     | 1.1771259             | torch.Size([2, 512, 256])        |
| 1217    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.mul                       | input_1             | torch.float32 |         | -0.8585741        | 6.9844007        | -0.0000000     | 1.1771259             | torch.Size([2, 512, 256])        |
| 1217    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.mul                       | output              | torch.float32 |         | 0.0000000         | 48.7818527       | 1.1771214      | 11.1451702            | torch.Size([2, 512, 256])        |
| 1218    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.13.layers.4.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 48.7818527       | 1.1771214      | 11.1451702            | torch.Size([2, 512, 256])        |
| 1218    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.13.layers.4.var_mean.mean             | output              | torch.float32 |         | 0.2251546         | 2.4453237        | 1.1771214      | 0.5016664             | torch.Size([2, 512, 1])          |
| 1219    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.13.layers.4.rsqrt                     | input               | torch.float32 |         | 0.2251546         | 2.4453237        | 1.1771214      | 0.5016664             | torch.Size([2, 512, 1])          |
| 1219    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.13.layers.4.rsqrt                     | output              | torch.float32 |         | 0.6394858         | 2.1074145        | 1.0415933      | 0.0799970             | torch.Size([2, 512, 1])          |
| 1220    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.out_mul                   | input_0             | torch.float32 |         | -0.8585741        | 6.9844007        | -0.0000000     | 1.1771259             | torch.Size([2, 512, 256])        |
| 1220    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.out_mul                   | input_1             | torch.float32 |         | 0.6394858         | 2.1074145        | 1.0415933      | 0.0799970             | torch.Size([2, 512, 1])          |
| 1220    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.out_mul                   | output              | torch.float32 |         | -0.7265575        | 6.9990067        | -0.0000000     | 0.9999921             | torch.Size([2, 512, 256])        |
| 1221    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.4.weight_quant              | input               | torch.float32 |         | 0.6633201         | 1.2187128        | 0.9636809      | 0.0072749             | torch.Size([256])                |
| 1221    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.4.weight_quant              | output              | torch.float32 |         | 0.6633201         | 1.2187128        | 0.9636809      | 0.0072749             | torch.Size([256])                |
| 1222    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.weight_mul                | input_0             | torch.float32 |         | -0.7265575        | 6.9990067        | -0.0000000     | 0.9999921             | torch.Size([2, 512, 256])        |
| 1222    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.weight_mul                | input_1             | torch.float32 |         | 0.6633201         | 1.2187128        | 0.9636809      | 0.0072749             | torch.Size([256])                |
| 1222    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.4.weight_mul                | output              | torch.float32 |         | -0.8854649        | 7.2200718        | 0.0149625      | 0.9773770             | torch.Size([2, 512, 256])        |
| 1223    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.4.bias_quant                | input               | torch.float32 |         | -0.0931333        | 0.3241574        | 0.0448928      | 0.0063926             | torch.Size([256])                |
| 1223    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.4.bias_quant                | output              | torch.float32 |         | -0.0931333        | 0.3241574        | 0.0448928      | 0.0063926             | torch.Size([256])                |
| 1224    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.layers.4.bias_add                  | input_0             | torch.float32 |         | -0.8854649        | 7.2200718        | 0.0149625      | 0.9773770             | torch.Size([2, 512, 256])        |
| 1224    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.layers.4.bias_add                  | input_1             | torch.float32 |         | -0.0931333        | 0.3241574        | 0.0448928      | 0.0063926             | torch.Size([256])                |
| 1224    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.layers.4.bias_add                  | output              | torch.float32 |         | -0.8414509        | 7.1927629        | 0.0598553      | 0.9250544             | torch.Size([2, 512, 256])        |
| 1225    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.5                           | input               | torch.float32 |         | -0.8414509        | 7.1927629        | 0.0598553      | 0.9250544             | torch.Size([2, 512, 256])        |
| 1225    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.5                           | weight              | torch.float32 |         | -0.4115984        | 0.4671635        | 0.0042406      | 0.0040801             | torch.Size([256, 256])           |
| 1225    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.5                           | bias                | torch.float32 |         | -0.1536481        | 0.0778537        | -0.0241879     | 0.0025930             | torch.Size([256])                |
| 1225    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.5                           | output              | torch.float32 |         | -9.0551729        | 11.0114813       | -0.9869632     | 5.0499735             | torch.Size([2, 512, 256])        |
| 1226    | torch.nn.modules.activation.ReLU                                                  | head.layers.13.layers.6                           | input               | torch.float32 |         | 0.0000000         | 11.0114813       | 0.5284569      | 1.3062353             | torch.Size([2, 512, 256])        |
| 1226    | torch.nn.modules.activation.ReLU                                                  | head.layers.13.layers.6                           | output              | torch.float32 |         | 0.0000000         | 11.0114813       | 0.5284569      | 1.3062353             | torch.Size([2, 512, 256])        |
| 1227    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.7                           | input               | torch.float32 |         | 0.0000000         | 11.0114813       | 0.5284569      | 1.3062353             | torch.Size([2, 512, 256])        |
| 1227    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.7                           | weight              | torch.float32 |         | -0.6832550        | 0.4791626        | -0.0062377     | 0.0030764             | torch.Size([256, 256])           |
| 1227    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.7                           | bias                | torch.float32 |         | -0.1049601        | 0.1796888        | -0.0124101     | 0.0017829             | torch.Size([256])                |
| 1227    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.7                           | output              | torch.float32 |         | -13.9233103       | 29.0915756       | -1.7558854     | 8.1791477             | torch.Size([2, 512, 256])        |
| 1228    | torch.nn.modules.activation.ReLU                                                  | head.layers.13.layers.8                           | input               | torch.float32 |         | 0.0000000         | 29.0915756       | 0.4327121      | 3.2802188             | torch.Size([2, 512, 256])        |
| 1228    | torch.nn.modules.activation.ReLU                                                  | head.layers.13.layers.8                           | output              | torch.float32 |         | 0.0000000         | 29.0915756       | 0.4327121      | 3.2802188             | torch.Size([2, 512, 256])        |
| 1229    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.13.layers.9.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 29.0915756       | 0.4327121      | 3.2802188             | torch.Size([2, 512, 256])        |
| 1229    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.13.layers.9.input_mean.mean           | output              | torch.float32 |         | 0.2585636         | 0.7307808        | 0.4327121      | 0.0139879             | torch.Size([2, 512, 1])          |
| 1230    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.13.layers.9.sub                       | input_0             | torch.float32 |         | 0.0000000         | 29.0915756       | 0.4327121      | 3.2802188             | torch.Size([2, 512, 256])        |
| 1230    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.13.layers.9.sub                       | input_1             | torch.float32 |         | 0.2585636         | 0.7307808        | 0.4327121      | 0.0139879             | torch.Size([2, 512, 1])          |
| 1230    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.13.layers.9.sub                       | output              | torch.float32 |         | -0.7307808        | 28.7114925       | -0.0000000     | 3.2662444             | torch.Size([2, 512, 256])        |
| 1231    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.mul                       | input_0             | torch.float32 |         | -0.7307808        | 28.7114925       | -0.0000000     | 3.2662444             | torch.Size([2, 512, 256])        |
| 1231    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.mul                       | input_1             | torch.float32 |         | -0.7307808        | 28.7114925       | -0.0000000     | 3.2662444             | torch.Size([2, 512, 256])        |
| 1231    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.mul                       | output              | torch.float32 |         | 0.0000000         | 824.3497925      | 3.2662318      | 903.9132080           | torch.Size([2, 512, 256])        |
| 1232    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.13.layers.9.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 824.3497925      | 3.2662318      | 903.9132080           | torch.Size([2, 512, 256])        |
| 1232    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.13.layers.9.var_mean.mean             | output              | torch.float32 |         | 0.6962491         | 5.7187099        | 3.2662320      | 0.7568471             | torch.Size([2, 512, 1])          |
| 1233    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.13.layers.9.rsqrt                     | input               | torch.float32 |         | 0.6962491         | 5.7187099        | 3.2662320      | 0.7568471             | torch.Size([2, 512, 1])          |
| 1233    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.13.layers.9.rsqrt                     | output              | torch.float32 |         | 0.4181678         | 1.1984352        | 0.5708579      | 0.0084284             | torch.Size([2, 512, 1])          |
| 1234    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.out_mul                   | input_0             | torch.float32 |         | -0.7307808        | 28.7114925       | -0.0000000     | 3.2662444             | torch.Size([2, 512, 256])        |
| 1234    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.out_mul                   | input_1             | torch.float32 |         | 0.4181678         | 1.1984352        | 0.5708579      | 0.0084284             | torch.Size([2, 512, 1])          |
| 1234    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.out_mul                   | output              | torch.float32 |         | -0.4745664        | 12.6547318       | -0.0000000     | 1.0000005             | torch.Size([2, 512, 256])        |
| 1235    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.9.weight_quant              | input               | torch.float32 |         | 0.8125745         | 1.0292015        | 0.9108959      | 0.0012828             | torch.Size([256])                |
| 1235    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.9.weight_quant              | output              | torch.float32 |         | 0.8125745         | 1.0292015        | 0.9108959      | 0.0012828             | torch.Size([256])                |
| 1236    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.weight_mul                | input_0             | torch.float32 |         | -0.4745664        | 12.6547318       | -0.0000000     | 1.0000005             | torch.Size([2, 512, 256])        |
| 1236    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.weight_mul                | input_1             | torch.float32 |         | 0.8125745         | 1.0292015        | 0.9108959      | 0.0012828             | torch.Size([256])                |
| 1236    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.9.weight_mul                | output              | torch.float32 |         | -0.4884245        | 10.5385752       | -0.0007191     | 0.7613146             | torch.Size([2, 512, 256])        |
| 1237    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.9.bias_quant                | input               | torch.float32 |         | -0.1482258        | 0.1146019        | 0.0601919      | 0.0022211             | torch.Size([256])                |
| 1237    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.9.bias_quant                | output              | torch.float32 |         | -0.1482258        | 0.1146019        | 0.0601919      | 0.0022211             | torch.Size([256])                |
| 1238    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.layers.9.bias_add                  | input_0             | torch.float32 |         | -0.4884245        | 10.5385752       | -0.0007191     | 0.7613146             | torch.Size([2, 512, 256])        |
| 1238    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.layers.9.bias_add                  | input_1             | torch.float32 |         | -0.1482258        | 0.1146019        | 0.0601919      | 0.0022211             | torch.Size([256])                |
| 1238    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.layers.9.bias_add                  | output              | torch.float32 |         | -0.5434340        | 10.3903494       | 0.0594728      | 0.7174311             | torch.Size([2, 512, 256])        |
| 1239    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.10                          | input               | torch.float32 |         | -0.5434340        | 10.3903494       | 0.0594728      | 0.7174311             | torch.Size([2, 512, 256])        |
| 1239    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.10                          | weight              | torch.float32 |         | -0.3740715        | 0.2434908        | -0.0008235     | 0.0021038             | torch.Size([11, 256])            |
| 1239    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.10                          | bias                | torch.float32 |         | -0.0558710        | 0.0500459        | -0.0099527     | 0.0010864             | torch.Size([11])                 |
| 1239    | torch.nn.modules.linear.Linear                                                    | head.layers.13.layers.10                          | output              | torch.float32 |         | -6.7024021        | 7.6591611        | -0.0047388     | 0.8441770             | torch.Size([2, 512, 11])         |
| 1240    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.11.scale_quant_stub         | input               | torch.float32 |         | 0.1286822         | 0.7985592        | 0.4143039      | 0.0426970             | torch.Size([11])                 |
| 1240    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.13.layers.11.scale_quant_stub         | output              | torch.float32 |         | 0.1286822         | 0.7985592        | 0.4143039      | 0.0426970             | torch.Size([11])                 |
| 1241    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.11.mul                      | input_0             | torch.float32 |         | -6.7024021        | 7.6591611        | -0.0047388     | 0.8441770             | torch.Size([2, 512, 11])         |
| 1241    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.11.mul                      | input_1             | torch.float32 |         | 0.1286822         | 0.7985592        | 0.4143039      | 0.0426970             | torch.Size([11])                 |
| 1241    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.13.layers.11.mul                      | output              | torch.float32 |         | -5.3522654        | 5.2809305        | -0.0007662     | 0.3520849             | torch.Size([2, 512, 11])         |
| 1242    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.add2                               | input_0             | torch.float32 |         | -5.3522654        | 5.2809305        | -0.0007662     | 0.3520849             | torch.Size([2, 512, 11])         |
| 1242    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.add2                               | input_1             | torch.float32 |         | -53.6162720       | 53.6826859       | 0.2125989      | 79.3594742            | torch.Size([2, 512, 11])         |
| 1242    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.13.add2                               | output              | torch.float32 |         | -53.5874481       | 53.7214432       | 0.2118326      | 78.6301498            | torch.Size([2, 512, 11])         |
| 1243    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(1)                                   | input               | torch.float32 |         | -53.5874481       | 53.7214432       | 0.2118326      | 78.6301498            | torch.Size([2, 512, 11])         |
| 1243    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(1)                                   | output              | torch.float32 |         | -53.5874481       | 53.7214432       | 0.2118326      | 78.6301498            | torch.Size([2, 512, 11])         |
| 1244    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.5874481       | 53.7214432       | 0.2118326      | 78.6301498            | torch.Size([2, 512, 11])         |
| 1244    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -53.5874481       | 53.7214432       | 0.7427100      | 286.9937744           | torch.Size([2, 512, 3])          |
| 1245    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(3)                   | input               | torch.float32 |         | -53.5874481       | 53.7214432       | 0.7427100      | 286.9937744           | torch.Size([2, 512, 3])          |
| 1245    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(3)                   | weight              | torch.float32 |         | -0.9216561        | 0.9167990        | -0.0046354     | 0.1373587             | torch.Size([128, 3])             |
| 1245    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(3)                   | bias                | torch.float32 |         | -1.0762298        | 1.0183468        | -0.0273298     | 0.3650480             | torch.Size([128])                |
| 1245    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(3)                   | output              | torch.float32 |         | -33.0884247       | 34.4853172       | -0.1120320     | 70.2825165            | torch.Size([2, 512, 128])        |
| 1246    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(3)                   | input               | torch.float32 |         | 0.0000000         | 34.4853172       | 2.8689475      | 25.8691502            | torch.Size([2, 512, 128])        |
| 1246    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(3)                   | output              | torch.float32 |         | 0.0000000         | 34.4853172       | 2.8689475      | 25.8691502            | torch.Size([2, 512, 128])        |
| 1247    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 34.4853172       | 2.8689475      | 25.8691502            | torch.Size([2, 512, 128])        |
| 1247    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(3)   | output              | torch.float32 |         | 0.2320359         | 7.3089504        | 2.8689475      | 4.0827446             | torch.Size([2, 512, 1])          |
| 1248    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(3)               | input_0             | torch.float32 |         | 0.0000000         | 34.4853172       | 2.8689475      | 25.8691502            | torch.Size([2, 512, 128])        |
| 1248    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(3)               | input_1             | torch.float32 |         | 0.2320359         | 7.3089504        | 2.8689475      | 4.0827446             | torch.Size([2, 512, 1])          |
| 1248    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(3)               | output              | torch.float32 |         | -7.3089504        | 28.8417778       | 0.0000000      | 21.7903614            | torch.Size([2, 512, 128])        |
| 1249    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(3)               | input_0             | torch.float32 |         | -7.3089504        | 28.8417778       | 0.0000000      | 21.7903614            | torch.Size([2, 512, 128])        |
| 1249    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(3)               | input_1             | torch.float32 |         | -7.3089504        | 28.8417778       | 0.0000000      | 21.7903614            | torch.Size([2, 512, 128])        |
| 1249    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(3)               | output              | torch.float32 |         | 0.0000000         | 831.8481445      | 21.7901955     | 2592.3190918          | torch.Size([2, 512, 128])        |
| 1250    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(3)     | input_0             | torch.float32 |         | 0.0000000         | 831.8481445      | 21.7901955     | 2592.3190918          | torch.Size([2, 512, 128])        |
| 1250    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(3)     | output              | torch.float32 |         | 0.1048559         | 75.2692719       | 21.7901955     | 464.7112427           | torch.Size([2, 512, 1])          |
| 1251    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(3)             | input               | torch.float32 |         | 0.1048559         | 75.2692719       | 21.7901955     | 464.7112427           | torch.Size([2, 512, 1])          |
| 1251    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(3)             | output              | torch.float32 |         | 0.1152633         | 3.0880404        | 0.9604777      | 1.4929918             | torch.Size([2, 512, 1])          |
| 1252    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(3)           | input_0             | torch.float32 |         | -7.3089504        | 28.8417778       | 0.0000000      | 21.7903614            | torch.Size([2, 512, 128])        |
| 1252    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(3)           | input_1             | torch.float32 |         | 0.1152633         | 3.0880404        | 0.9604777      | 1.4929918             | torch.Size([2, 512, 1])          |
| 1252    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(3)           | output              | torch.float32 |         | -0.8850133        | 3.8292515        | -0.0000000     | 0.9999835             | torch.Size([2, 512, 128])        |
| 1253    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(3)      | input               | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 1253    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(3)      | output              | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 1254    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(3)        | input_0             | torch.float32 |         | -0.8850133        | 3.8292515        | -0.0000000     | 0.9999835             | torch.Size([2, 512, 128])        |
| 1254    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(3)        | input_1             | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 1254    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(3)        | output              | torch.float32 |         | -1.0500964        | 4.8768106        | 0.0012207      | 0.9536830             | torch.Size([2, 512, 128])        |
| 1255    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(3)        | input               | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 1255    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(3)        | output              | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 1256    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(3)          | input_0             | torch.float32 |         | -1.0500964        | 4.8768106        | 0.0012207      | 0.9536830             | torch.Size([2, 512, 128])        |
| 1256    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(3)          | input_1             | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 1256    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(3)          | output              | torch.float32 |         | -1.0503926        | 4.8730516        | 0.0100411      | 0.9470173             | torch.Size([2, 512, 128])        |
| 1257    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(3)                   | input               | torch.float32 |         | -1.0503926        | 4.8730516        | 0.0100411      | 0.9470173             | torch.Size([2, 512, 128])        |
| 1257    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(3)                   | weight              | torch.float32 |         | -0.3750711        | 0.3968706        | 0.0019093      | 0.0048458             | torch.Size([128, 128])           |
| 1257    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(3)                   | bias                | torch.float32 |         | -0.1863807        | 0.1385574        | -0.0156467     | 0.0047256             | torch.Size([128])                |
| 1257    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(3)                   | output              | torch.float32 |         | -7.5591207        | 7.5510058        | -0.0990045     | 3.4491897             | torch.Size([2, 512, 128])        |
| 1258    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(3)                   | input               | torch.float32 |         | 0.0000000         | 7.5510058        | 0.6423523      | 1.2276748             | torch.Size([2, 512, 128])        |
| 1258    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(3)                   | output              | torch.float32 |         | 0.0000000         | 7.5510058        | 0.6423523      | 1.2276748             | torch.Size([2, 512, 128])        |
| 1259    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 7.5510058        | 0.6423523      | 1.2276748             | torch.Size([2, 512, 128])        |
| 1259    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(3)   | output              | torch.float32 |         | 0.2880361         | 1.3417258        | 0.6423523      | 0.1646018             | torch.Size([2, 512, 1])          |
| 1260    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(3)               | input_0             | torch.float32 |         | 0.0000000         | 7.5510058        | 0.6423523      | 1.2276748             | torch.Size([2, 512, 128])        |
| 1260    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(3)               | input_1             | torch.float32 |         | 0.2880361         | 1.3417258        | 0.6423523      | 0.1646018             | torch.Size([2, 512, 1])          |
| 1260    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(3)               | output              | torch.float32 |         | -1.3417258        | 6.2092800        | -0.0000000     | 1.0632327             | torch.Size([2, 512, 128])        |
| 1261    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(3)               | input_0             | torch.float32 |         | -1.3417258        | 6.2092800        | -0.0000000     | 1.0632327             | torch.Size([2, 512, 128])        |
| 1261    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(3)               | input_1             | torch.float32 |         | -1.3417258        | 6.2092800        | -0.0000000     | 1.0632327             | torch.Size([2, 512, 128])        |
| 1261    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(3)               | output              | torch.float32 |         | 0.0000000         | 38.5551567       | 1.0632244      | 7.5853386             | torch.Size([2, 512, 128])        |
| 1262    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(3)     | input_0             | torch.float32 |         | 0.0000000         | 38.5551567       | 1.0632244      | 7.5853386             | torch.Size([2, 512, 128])        |
| 1262    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(3)     | output              | torch.float32 |         | 0.3058959         | 2.8010161        | 1.0632244      | 0.9634361             | torch.Size([2, 512, 1])          |
| 1263    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(3)             | input               | torch.float32 |         | 0.3058959         | 2.8010161        | 1.0632244      | 0.9634361             | torch.Size([2, 512, 1])          |
| 1263    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(3)             | output              | torch.float32 |         | 0.5975049         | 1.8080318        | 1.2378291      | 0.1599427             | torch.Size([2, 512, 1])          |
| 1264    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(3)           | input_0             | torch.float32 |         | -1.3417258        | 6.2092800        | -0.0000000     | 1.0632327             | torch.Size([2, 512, 128])        |
| 1264    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(3)           | input_1             | torch.float32 |         | 0.5975049         | 1.8080318        | 1.2378291      | 0.1599427             | torch.Size([2, 512, 1])          |
| 1264    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(3)           | output              | torch.float32 |         | -0.8045583        | 7.0666199        | -0.0000000     | 0.9999907             | torch.Size([2, 512, 128])        |
| 1265    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(3)      | input               | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 1265    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(3)      | output              | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 1266    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(3)        | input_0             | torch.float32 |         | -0.8045583        | 7.0666199        | -0.0000000     | 0.9999907             | torch.Size([2, 512, 128])        |
| 1266    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(3)        | input_1             | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 1266    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(3)        | output              | torch.float32 |         | -0.9412995        | 6.9438519        | 0.0358765      | 0.9534937             | torch.Size([2, 512, 128])        |
| 1267    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(3)        | input               | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 1267    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(3)        | output              | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 1268    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(3)          | input_0             | torch.float32 |         | -0.9412995        | 6.9438519        | 0.0358765      | 0.9534937             | torch.Size([2, 512, 128])        |
| 1268    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(3)          | input_1             | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 1268    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(3)          | output              | torch.float32 |         | -0.9409930        | 6.9403076        | 0.0676788      | 0.9281579             | torch.Size([2, 512, 128])        |
| 1269    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(3)                   | input               | torch.float32 |         | -0.9409930        | 6.9403076        | 0.0676788      | 0.9281579             | torch.Size([2, 512, 128])        |
| 1269    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(3)                   | weight              | torch.float32 |         | -0.7504157        | 0.4182976        | -0.0024651     | 0.0052447             | torch.Size([128, 128])           |
| 1269    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(3)                   | bias                | torch.float32 |         | -0.1397866        | 0.1210779        | 0.0064616      | 0.0040949             | torch.Size([128])                |
| 1269    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(3)                   | output              | torch.float32 |         | -9.5338116        | 6.9650645        | -0.0399435     | 4.9778380             | torch.Size([2, 512, 128])        |
| 1270    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(3)                   | input               | torch.float32 |         | 0.0000000         | 6.9650645        | 0.8347771      | 1.5621340             | torch.Size([2, 512, 128])        |
| 1270    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(3)                   | output              | torch.float32 |         | 0.0000000         | 6.9650645        | 0.8347771      | 1.5621340             | torch.Size([2, 512, 128])        |
| 1271    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 6.9650645        | 0.8347771      | 1.5621340             | torch.Size([2, 512, 128])        |
| 1271    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(3)   | output              | torch.float32 |         | 0.5523615         | 1.3287582        | 0.8347771      | 0.0780002             | torch.Size([2, 512, 1])          |
| 1272    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(3)               | input_0             | torch.float32 |         | 0.0000000         | 6.9650645        | 0.8347771      | 1.5621340             | torch.Size([2, 512, 128])        |
| 1272    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(3)               | input_1             | torch.float32 |         | 0.5523615         | 1.3287582        | 0.8347771      | 0.0780002             | torch.Size([2, 512, 1])          |
| 1272    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(3)               | output              | torch.float32 |         | -1.3287582        | 6.1732445        | 0.0000000      | 1.4842092             | torch.Size([2, 512, 128])        |
| 1273    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(3)               | input_0             | torch.float32 |         | -1.3287582        | 6.1732445        | 0.0000000      | 1.4842092             | torch.Size([2, 512, 128])        |
| 1273    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(3)               | input_1             | torch.float32 |         | -1.3287582        | 6.1732445        | 0.0000000      | 1.4842092             | torch.Size([2, 512, 128])        |
| 1273    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(3)               | output              | torch.float32 |         | 0.0000000         | 38.1089478       | 1.4841979      | 9.3563805             | torch.Size([2, 512, 128])        |
| 1274    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(3)     | input_0             | torch.float32 |         | 0.0000000         | 38.1089478       | 1.4841979      | 9.3563805             | torch.Size([2, 512, 128])        |
| 1274    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(3)     | output              | torch.float32 |         | 0.8229649         | 2.7221603        | 1.4841979      | 0.4862571             | torch.Size([2, 512, 1])          |
| 1275    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(3)             | input               | torch.float32 |         | 0.8229649         | 2.7221603        | 1.4841979      | 0.4862571             | torch.Size([2, 512, 1])          |
| 1275    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(3)             | output              | torch.float32 |         | 0.6060973         | 1.1023176        | 0.8768950      | 0.0266797             | torch.Size([2, 512, 1])          |
| 1276    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(3)           | input_0             | torch.float32 |         | -1.3287582        | 6.1732445        | 0.0000000      | 1.4842092             | torch.Size([2, 512, 128])        |
| 1276    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(3)           | input_1             | torch.float32 |         | 0.6060973         | 1.1023176        | 0.8768950      | 0.0266797             | torch.Size([2, 512, 1])          |
| 1276    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(3)           | output              | torch.float32 |         | -0.8053569        | 5.0268879        | -0.0000000     | 0.9999997             | torch.Size([2, 512, 128])        |
| 1277    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(3)      | input               | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 1277    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(3)      | output              | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 1278    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(3)        | input_0             | torch.float32 |         | -0.8053569        | 5.0268879        | -0.0000000     | 0.9999997             | torch.Size([2, 512, 128])        |
| 1278    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(3)        | input_1             | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 1278    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(3)        | output              | torch.float32 |         | -0.9060112        | 5.2395873        | 0.0158491      | 0.9935773             | torch.Size([2, 512, 128])        |
| 1279    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(3)        | input               | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 1279    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(3)        | output              | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 1280    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(3)          | input_0             | torch.float32 |         | -0.9060112        | 5.2395873        | 0.0158491      | 0.9935773             | torch.Size([2, 512, 128])        |
| 1280    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(3)          | input_1             | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 1280    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(3)          | output              | torch.float32 |         | -0.8926874        | 5.2639050        | 0.0374871      | 0.9796066             | torch.Size([2, 512, 128])        |
| 1281    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(3)                   | input               | torch.float32 |         | -0.8926874        | 5.2639050        | 0.0374871      | 0.9796066             | torch.Size([2, 512, 128])        |
| 1281    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(3)                   | weight              | torch.float32 |         | -0.4264432        | 0.3183554        | 0.0005866      | 0.0053991             | torch.Size([128, 128])           |
| 1281    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(3)                   | bias                | torch.float32 |         | -0.1690418        | 0.1536980        | -0.0166056     | 0.0039884             | torch.Size([128])                |
| 1281    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(3)                   | output              | torch.float32 |         | -11.7832813       | 10.5399885       | -0.4095693     | 4.4036813             | torch.Size([2, 512, 128])        |
| 1282    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(3)                  | input               | torch.float32 |         | 0.0000000         | 10.5399885       | 0.6280511      | 1.5290695             | torch.Size([2, 512, 128])        |
| 1282    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(3)                  | output              | torch.float32 |         | 0.0000000         | 10.5399885       | 0.6280511      | 1.5290695             | torch.Size([2, 512, 128])        |
| 1283    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(3)  | input_0             | torch.float32 |         | 0.0000000         | 10.5399885       | 0.6280511      | 1.5290695             | torch.Size([2, 512, 128])        |
| 1283    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(3)  | output              | torch.float32 |         | 0.5212005         | 0.7423940        | 0.6280511      | 0.0020932             | torch.Size([2, 512, 1])          |
| 1284    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(3)              | input_0             | torch.float32 |         | 0.0000000         | 10.5399885       | 0.6280511      | 1.5290695             | torch.Size([2, 512, 128])        |
| 1284    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(3)              | input_1             | torch.float32 |         | 0.5212005         | 0.7423940        | 0.6280511      | 0.0020932             | torch.Size([2, 512, 1])          |
| 1284    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(3)              | output              | torch.float32 |         | -0.7423940        | 9.9833040        | 0.0000000      | 1.5269784             | torch.Size([2, 512, 128])        |
| 1285    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(3)              | input_0             | torch.float32 |         | -0.7423940        | 9.9833040        | 0.0000000      | 1.5269784             | torch.Size([2, 512, 128])        |
| 1285    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(3)              | input_1             | torch.float32 |         | -0.7423940        | 9.9833040        | 0.0000000      | 1.5269784             | torch.Size([2, 512, 128])        |
| 1285    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(3)              | output              | torch.float32 |         | 0.0000000         | 99.6663589       | 1.5269668      | 24.3871326            | torch.Size([2, 512, 128])        |
| 1286    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(3)    | input_0             | torch.float32 |         | 0.0000000         | 99.6663589       | 1.5269668      | 24.3871326            | torch.Size([2, 512, 128])        |
| 1286    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(3)    | output              | torch.float32 |         | 1.0560397         | 1.9571509        | 1.5269667      | 0.0508079             | torch.Size([2, 512, 1])          |
| 1287    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(3)            | input               | torch.float32 |         | 1.0560397         | 1.9571509        | 1.5269667      | 0.0508079             | torch.Size([2, 512, 1])          |
| 1287    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(3)            | output              | torch.float32 |         | 0.7148036         | 0.9731008        | 0.8161625      | 0.0038928             | torch.Size([2, 512, 1])          |
| 1288    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(3)          | input_0             | torch.float32 |         | -0.7423940        | 9.9833040        | 0.0000000      | 1.5269784             | torch.Size([2, 512, 128])        |
| 1288    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(3)          | input_1             | torch.float32 |         | 0.7148036         | 0.9731008        | 0.8161625      | 0.0038928             | torch.Size([2, 512, 1])          |
| 1288    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(3)          | output              | torch.float32 |         | -0.6109548        | 7.5560484        | 0.0000000      | 1.0000010             | torch.Size([2, 512, 128])        |
| 1289    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(3)     | input               | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 1289    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(3)     | output              | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 1290    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(3)       | input_0             | torch.float32 |         | -0.6109548        | 7.5560484        | 0.0000000      | 1.0000010             | torch.Size([2, 512, 128])        |
| 1290    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(3)       | input_1             | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 1290    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(3)       | output              | torch.float32 |         | -0.8491216        | 7.6325989        | 0.0095040      | 0.9033922             | torch.Size([2, 512, 128])        |
| 1291    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(3)       | input               | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 1291    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(3)       | output              | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 1292    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(3)         | input_0             | torch.float32 |         | -0.8491216        | 7.6325989        | 0.0095040      | 0.9033922             | torch.Size([2, 512, 128])        |
| 1292    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(3)         | input_1             | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 1292    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(3)         | output              | torch.float32 |         | -0.8532427        | 7.5853052        | 0.0714943      | 0.8688896             | torch.Size([2, 512, 128])        |
| 1293    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.5874481       | 53.7214432       | 0.2118326      | 78.6301498            | torch.Size([2, 512, 11])         |
| 1293    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -0.9958311        | 2.7298889        | 0.2833154      | 0.3630393             | torch.Size([2, 512, 3])          |
| 1294    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(3)                  | input               | torch.float32 |         | -0.9958311        | 2.7298889        | 0.2833154      | 0.3630393             | torch.Size([2, 512, 3])          |
| 1294    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(3)                  | weight              | torch.float32 |         | -0.8288664        | 0.6362330        | 0.0683853      | 0.1118651             | torch.Size([32, 3])              |
| 1294    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(3)                  | bias                | torch.float32 |         | -0.5554879        | 0.5432062        | 0.0766153      | 0.1068659             | torch.Size([32])                 |
| 1294    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(3)                  | output              | torch.float32 |         | -1.9697022        | 2.3432424        | 0.1212648      | 0.2318539             | torch.Size([2, 512, 32])         |
| 1295    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(3)                  | input               | torch.float32 |         | 0.0000000         | 2.3432424        | 0.2591443      | 0.0984248             | torch.Size([2, 512, 32])         |
| 1295    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(3)                  | output              | torch.float32 |         | 0.0000000         | 2.3432424        | 0.2591443      | 0.0984248             | torch.Size([2, 512, 32])         |
| 1296    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(3)  | input_0             | torch.float32 |         | 0.0000000         | 2.3432424        | 0.2591443      | 0.0984248             | torch.Size([2, 512, 32])         |
| 1296    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(3)  | output              | torch.float32 |         | 0.1616453         | 0.6745529        | 0.2591443      | 0.0128592             | torch.Size([2, 512, 1])          |
| 1297    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(3)              | input_0             | torch.float32 |         | 0.0000000         | 2.3432424        | 0.2591443      | 0.0984248             | torch.Size([2, 512, 32])         |
| 1297    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(3)              | input_1             | torch.float32 |         | 0.1616453         | 0.6745529        | 0.2591443      | 0.0128592             | torch.Size([2, 512, 1])          |
| 1297    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(3)              | output              | torch.float32 |         | -0.6745529        | 1.6686895        | -0.0000000     | 0.0855778             | torch.Size([2, 512, 32])         |
| 1298    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(3)              | input_0             | torch.float32 |         | -0.6745529        | 1.6686895        | -0.0000000     | 0.0855778             | torch.Size([2, 512, 32])         |
| 1298    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(3)              | input_1             | torch.float32 |         | -0.6745529        | 1.6686895        | -0.0000000     | 0.0855778             | torch.Size([2, 512, 32])         |
| 1298    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(3)              | output              | torch.float32 |         | 0.0000000         | 2.7845247        | 0.0855752      | 0.0260295             | torch.Size([2, 512, 32])         |
| 1299    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(3)    | input_0             | torch.float32 |         | 0.0000000         | 2.7845247        | 0.0855752      | 0.0260295             | torch.Size([2, 512, 32])         |
| 1299    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(3)    | output              | torch.float32 |         | 0.0319960         | 0.4379449        | 0.0855752      | 0.0047296             | torch.Size([2, 512, 1])          |
| 1300    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(3)            | input               | torch.float32 |         | 0.0319960         | 0.4379449        | 0.0855752      | 0.0047296             | torch.Size([2, 512, 1])          |
| 1300    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(3)            | output              | torch.float32 |         | 1.5110724         | 5.5896492        | 4.0843906      | 1.4855742             | torch.Size([2, 512, 1])          |
| 1301    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(3)          | input_0             | torch.float32 |         | -0.6745529        | 1.6686895        | -0.0000000     | 0.0855778             | torch.Size([2, 512, 32])         |
| 1301    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(3)          | input_1             | torch.float32 |         | 1.5110724         | 5.5896492        | 4.0843906      | 1.4855742             | torch.Size([2, 512, 1])          |
| 1301    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(3)          | output              | torch.float32 |         | -1.0977298        | 3.0547600        | -0.0000000     | 0.9998488             | torch.Size([2, 512, 32])         |
| 1302    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(3)     | input               | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 1302    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(3)     | output              | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 1303    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(3)       | input_0             | torch.float32 |         | -1.0977298        | 3.0547600        | -0.0000000     | 0.9998488             | torch.Size([2, 512, 32])         |
| 1303    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(3)       | input_1             | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 1303    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(3)       | output              | torch.float32 |         | -1.3103307        | 3.2750378        | 0.0068687      | 0.9848234             | torch.Size([2, 512, 32])         |
| 1304    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(3)       | input               | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 1304    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(3)       | output              | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 1305    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(3)         | input_0             | torch.float32 |         | -1.3103307        | 3.2750378        | 0.0068687      | 0.9848234             | torch.Size([2, 512, 32])         |
| 1305    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(3)         | input_1             | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 1305    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(3)         | output              | torch.float32 |         | -1.2874926        | 3.2714169        | 0.0103950      | 0.9263473             | torch.Size([2, 512, 32])         |
| 1306    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(3)                  | input               | torch.float32 |         | -1.2874926        | 3.2714169        | 0.0103950      | 0.9263473             | torch.Size([2, 512, 32])         |
| 1306    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(3)                  | weight              | torch.float32 |         | -0.5793310        | 0.5422795        | -0.0032135     | 0.0176575             | torch.Size([32, 32])             |
| 1306    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(3)                  | bias                | torch.float32 |         | -0.1716317        | 0.2230143        | 0.0007250      | 0.0126328             | torch.Size([32])                 |
| 1306    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(3)                  | output              | torch.float32 |         | -4.2661686        | 2.1649475        | -0.1990345     | 1.4343745             | torch.Size([2, 512, 32])         |
| 1307    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(3)                  | input               | torch.float32 |         | 0.0000000         | 2.1649475        | 0.3727338      | 0.2658879             | torch.Size([2, 512, 32])         |
| 1307    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(3)                  | output              | torch.float32 |         | 0.0000000         | 2.1649475        | 0.3727338      | 0.2658879             | torch.Size([2, 512, 32])         |
| 1308    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(3)  | input_0             | torch.float32 |         | 0.0000000         | 2.1649475        | 0.3727338      | 0.2658879             | torch.Size([2, 512, 32])         |
| 1308    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(3)  | output              | torch.float32 |         | 0.2681682         | 0.4244095        | 0.3727338      | 0.0012514             | torch.Size([2, 512, 1])          |
| 1309    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(3)              | input_0             | torch.float32 |         | 0.0000000         | 2.1649475        | 0.3727338      | 0.2658879             | torch.Size([2, 512, 32])         |
| 1309    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(3)              | input_1             | torch.float32 |         | 0.2681682         | 0.4244095        | 0.3727338      | 0.0012514             | torch.Size([2, 512, 1])          |
| 1309    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(3)              | output              | torch.float32 |         | -0.4244095        | 1.8206973        | -0.0000000     | 0.2646377             | torch.Size([2, 512, 32])         |
| 1310    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(3)              | input_0             | torch.float32 |         | -0.4244095        | 1.8206973        | -0.0000000     | 0.2646377             | torch.Size([2, 512, 32])         |
| 1310    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(3)              | input_1             | torch.float32 |         | -0.4244095        | 1.8206973        | -0.0000000     | 0.2646377             | torch.Size([2, 512, 32])         |
| 1310    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(3)              | output              | torch.float32 |         | 0.0000000         | 3.3149388        | 0.2646296      | 0.2060953             | torch.Size([2, 512, 32])         |
| 1311    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(3)    | input_0             | torch.float32 |         | 0.0000000         | 3.3149388        | 0.2646296      | 0.2060953             | torch.Size([2, 512, 32])         |
| 1311    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(3)    | output              | torch.float32 |         | 0.1541288         | 0.3593265        | 0.2646296      | 0.0049907             | torch.Size([2, 512, 1])          |
| 1312    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(3)            | input               | torch.float32 |         | 0.1541288         | 0.3593265        | 0.2646296      | 0.0049907             | torch.Size([2, 512, 1])          |
| 1312    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(3)            | output              | torch.float32 |         | 1.6682047         | 2.5470889        | 2.0032830      | 0.0880822             | torch.Size([2, 512, 1])          |
| 1313    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(3)          | input_0             | torch.float32 |         | -0.4244095        | 1.8206973        | -0.0000000     | 0.2646377             | torch.Size([2, 512, 32])         |
| 1313    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(3)          | input_1             | torch.float32 |         | 1.6682047         | 2.5470889        | 2.0032830      | 0.0880822             | torch.Size([2, 512, 1])          |
| 1313    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(3)          | output              | torch.float32 |         | -0.9113805        | 3.8577158        | -0.0000000     | 0.9999894             | torch.Size([2, 512, 32])         |
| 1314    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(3)     | input               | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 1314    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(3)     | output              | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 1315    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(3)       | input_0             | torch.float32 |         | -0.9113805        | 3.8577158        | -0.0000000     | 0.9999894             | torch.Size([2, 512, 32])         |
| 1315    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(3)       | input_1             | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 1315    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(3)       | output              | torch.float32 |         | -0.9195275        | 3.6864405        | 0.0111371      | 0.9992426             | torch.Size([2, 512, 32])         |
| 1316    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(3)       | input               | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 1316    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(3)       | output              | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 1317    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(3)         | input_0             | torch.float32 |         | -0.9195275        | 3.6864405        | 0.0111371      | 0.9992426             | torch.Size([2, 512, 32])         |
| 1317    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(3)         | input_1             | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 1317    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(3)         | output              | torch.float32 |         | -0.9341540        | 3.7144630        | 0.0208992      | 0.9676508             | torch.Size([2, 512, 32])         |
| 1318    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(3)                  | input               | torch.float32 |         | -0.9341540        | 3.7144630        | 0.0208992      | 0.9676508             | torch.Size([2, 512, 32])         |
| 1318    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(3)                  | weight              | torch.float32 |         | -0.5712157        | 0.5219681        | -0.0062917     | 0.0166056             | torch.Size([32, 32])             |
| 1318    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(3)                  | bias                | torch.float32 |         | -0.1649730        | 0.2318604        | 0.0253026      | 0.0136139             | torch.Size([32])                 |
| 1318    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(3)                  | output              | torch.float32 |         | -4.3227019        | 2.6448612        | -0.1925761     | 1.4086167             | torch.Size([2, 512, 32])         |
| 1319    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(3)                  | input               | torch.float32 |         | 0.0000000         | 2.6448612        | 0.3703848      | 0.2741562             | torch.Size([2, 512, 32])         |
| 1319    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(3)                  | output              | torch.float32 |         | 0.0000000         | 2.6448612        | 0.3703848      | 0.2741562             | torch.Size([2, 512, 32])         |
| 1320    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(3)  | input_0             | torch.float32 |         | 0.0000000         | 2.6448612        | 0.3703848      | 0.2741562             | torch.Size([2, 512, 32])         |
| 1320    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(3)  | output              | torch.float32 |         | 0.1870642         | 0.4823619        | 0.3703848      | 0.0097973             | torch.Size([2, 512, 1])          |
| 1321    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(3)              | input_0             | torch.float32 |         | 0.0000000         | 2.6448612        | 0.3703848      | 0.2741562             | torch.Size([2, 512, 32])         |
| 1321    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(3)              | input_1             | torch.float32 |         | 0.1870642         | 0.4823619        | 0.3703848      | 0.0097973             | torch.Size([2, 512, 1])          |
| 1321    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(3)              | output              | torch.float32 |         | -0.4823619        | 2.2085414        | 0.0000000      | 0.2643682             | torch.Size([2, 512, 32])         |
| 1322    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(3)              | input_0             | torch.float32 |         | -0.4823619        | 2.2085414        | 0.0000000      | 0.2643682             | torch.Size([2, 512, 32])         |
| 1322    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(3)              | input_1             | torch.float32 |         | -0.4823619        | 2.2085414        | 0.0000000      | 0.2643682             | torch.Size([2, 512, 32])         |
| 1322    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(3)              | output              | torch.float32 |         | 0.0000000         | 4.8776550        | 0.2643601      | 0.2659959             | torch.Size([2, 512, 32])         |
| 1323    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(3)    | input_0             | torch.float32 |         | 0.0000000         | 4.8776550        | 0.2643601      | 0.2659959             | torch.Size([2, 512, 32])         |
| 1323    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(3)    | output              | torch.float32 |         | 0.1374372         | 0.3907049        | 0.2643601      | 0.0061995             | torch.Size([2, 512, 1])          |
| 1324    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(3)            | input               | torch.float32 |         | 0.1374372         | 0.3907049        | 0.2643601      | 0.0061995             | torch.Size([2, 512, 1])          |
| 1324    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(3)            | output              | torch.float32 |         | 1.5998158         | 2.6973178        | 2.0265470      | 0.1301001             | torch.Size([2, 512, 1])          |
| 1325    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(3)          | input_0             | torch.float32 |         | -0.4823619        | 2.2085414        | 0.0000000      | 0.2643682             | torch.Size([2, 512, 32])         |
| 1325    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(3)          | input_1             | torch.float32 |         | 1.5998158         | 2.6973178        | 2.0265470      | 0.1301001             | torch.Size([2, 512, 1])          |
| 1325    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(3)          | output              | torch.float32 |         | -0.9530660        | 3.8883650        | 0.0000000      | 0.9999881             | torch.Size([2, 512, 32])         |
| 1326    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(3)     | input               | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 1326    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(3)     | output              | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 1327    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(3)       | input_0             | torch.float32 |         | -0.9530660        | 3.8883650        | 0.0000000      | 0.9999881             | torch.Size([2, 512, 32])         |
| 1327    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(3)       | input_1             | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 1327    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(3)       | output              | torch.float32 |         | -1.0784400        | 4.0162349        | 0.0054362      | 1.0227309             | torch.Size([2, 512, 32])         |
| 1328    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(3)       | input               | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 1328    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(3)       | output              | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 1329    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(3)         | input_0             | torch.float32 |         | -1.0784400        | 4.0162349        | 0.0054362      | 1.0227309             | torch.Size([2, 512, 32])         |
| 1329    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(3)         | input_1             | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 1329    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(3)         | output              | torch.float32 |         | -1.0473592        | 4.0411634        | 0.0096324      | 0.9979615             | torch.Size([2, 512, 32])         |
| 1330    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(3)                  | input               | torch.float32 |         | -1.0473592        | 4.0411634        | 0.0096324      | 0.9979615             | torch.Size([2, 512, 32])         |
| 1330    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(3)                  | weight              | torch.float32 |         | -0.3204980        | 0.3365203        | -0.0020388     | 0.0145364             | torch.Size([32, 32])             |
| 1330    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(3)                  | bias                | torch.float32 |         | -0.1559148        | 0.2119379        | 0.0091616      | 0.0105488             | torch.Size([32])                 |
| 1330    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(3)                  | output              | torch.float32 |         | -2.3230715        | 2.6682758        | 0.0114652      | 0.8286761             | torch.Size([2, 512, 32])         |
| 1331    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(3)                 | input               | torch.float32 |         | 0.0000000         | 2.6682758        | 0.3658422      | 0.2959315             | torch.Size([2, 512, 32])         |
| 1331    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(3)                 | output              | torch.float32 |         | 0.0000000         | 2.6682758        | 0.3658422      | 0.2959315             | torch.Size([2, 512, 32])         |
| 1332    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(3) | input_0             | torch.float32 |         | 0.0000000         | 2.6682758        | 0.3658422      | 0.2959315             | torch.Size([2, 512, 32])         |
| 1332    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(3) | output              | torch.float32 |         | 0.2787464         | 0.5683384        | 0.3658422      | 0.0018704             | torch.Size([2, 512, 1])          |
| 1333    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(3)             | input_0             | torch.float32 |         | 0.0000000         | 2.6682758        | 0.3658422      | 0.2959315             | torch.Size([2, 512, 32])         |
| 1333    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(3)             | input_1             | torch.float32 |         | 0.2787464         | 0.5683384        | 0.3658422      | 0.0018704             | torch.Size([2, 512, 1])          |
| 1333    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(3)             | output              | torch.float32 |         | -0.5683384        | 2.2622340        | -0.0000000     | 0.2940629             | torch.Size([2, 512, 32])         |
| 1334    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(3)             | input_0             | torch.float32 |         | -0.5683384        | 2.2622340        | -0.0000000     | 0.2940629             | torch.Size([2, 512, 32])         |
| 1334    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(3)             | input_1             | torch.float32 |         | -0.5683384        | 2.2622340        | -0.0000000     | 0.2940629             | torch.Size([2, 512, 32])         |
| 1334    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(3)             | output              | torch.float32 |         | 0.0000000         | 5.1177025        | 0.2940540      | 0.4045033             | torch.Size([2, 512, 32])         |
| 1335    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 5.1177025        | 0.2940540      | 0.4045033             | torch.Size([2, 512, 32])         |
| 1335    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(3)   | output              | torch.float32 |         | 0.1786091         | 0.4105859        | 0.2940540      | 0.0014195             | torch.Size([2, 512, 1])          |
| 1336    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(3)           | input               | torch.float32 |         | 0.1786091         | 0.4105859        | 0.2940540      | 0.0014195             | torch.Size([2, 512, 1])          |
| 1336    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(3)           | output              | torch.float32 |         | 1.5606039         | 2.3661160        | 1.8559742      | 0.0155425             | torch.Size([2, 512, 1])          |
| 1337    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(3)         | input_0             | torch.float32 |         | -0.5683384        | 2.2622340        | -0.0000000     | 0.2940629             | torch.Size([2, 512, 32])         |
| 1337    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(3)         | input_1             | torch.float32 |         | 1.5606039         | 2.3661160        | 1.8559742      | 0.0155425             | torch.Size([2, 512, 1])          |
| 1337    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(3)         | output              | torch.float32 |         | -1.1340287        | 3.9995568        | -0.0000000     | 0.9999958             | torch.Size([2, 512, 32])         |
| 1338    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(3)    | input               | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 1338    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(3)    | output              | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 1339    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(3)      | input_0             | torch.float32 |         | -1.1340287        | 3.9995568        | -0.0000000     | 0.9999958             | torch.Size([2, 512, 32])         |
| 1339    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(3)      | input_1             | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 1339    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(3)      | output              | torch.float32 |         | -1.7783531        | 5.7696714        | -0.0381120     | 1.4185481             | torch.Size([2, 512, 32])         |
| 1340    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(3)      | input               | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 1340    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(3)      | output              | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 1341    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(3)        | input_0             | torch.float32 |         | -1.7783531        | 5.7696714        | -0.0381120     | 1.4185481             | torch.Size([2, 512, 32])         |
| 1341    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(3)        | input_1             | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 1341    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(3)        | output              | torch.float32 |         | -1.7289910        | 5.8493681        | 0.0064566      | 1.3257949             | torch.Size([2, 512, 32])         |
| 1342    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.5874481       | 53.7214432       | 0.2118326      | 78.6301498            | torch.Size([2, 512, 11])         |
| 1342    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -1.1331533        | 1.1089326        | -0.0393762     | 0.1007427             | torch.Size([2, 512, 2])          |
| 1343    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(3)                   | input               | torch.float32 |         | -1.1331533        | 1.1089326        | -0.0393762     | 0.1007427             | torch.Size([2, 512, 2])          |
| 1343    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(3)                   | weight              | torch.float32 |         | -0.7023237        | 0.7394427        | 0.0490668      | 0.1972211             | torch.Size([32, 2])              |
| 1343    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(3)                   | bias                | torch.float32 |         | -0.7971504        | 0.6681666        | -0.1171320     | 0.1641774             | torch.Size([32])                 |
| 1343    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(3)                   | output              | torch.float32 |         | -1.5429184        | 1.2059331        | -0.1214760     | 0.1984729             | torch.Size([2, 512, 32])         |
| 1344    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(3)                   | input               | torch.float32 |         | 0.0000000         | 1.2059331        | 0.1333394      | 0.0551842             | torch.Size([2, 512, 32])         |
| 1344    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(3)                   | output              | torch.float32 |         | 0.0000000         | 1.2059331        | 0.1333394      | 0.0551842             | torch.Size([2, 512, 32])         |
| 1345    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 1.2059331        | 0.1333394      | 0.0551842             | torch.Size([2, 512, 32])         |
| 1345    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(3)   | output              | torch.float32 |         | 0.1085071         | 0.2466291        | 0.1333394      | 0.0006545             | torch.Size([2, 512, 1])          |
| 1346    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(3)               | input_0             | torch.float32 |         | 0.0000000         | 1.2059331        | 0.1333394      | 0.0551842             | torch.Size([2, 512, 32])         |
| 1346    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(3)               | input_1             | torch.float32 |         | 0.1085071         | 0.2466291        | 0.1333394      | 0.0006545             | torch.Size([2, 512, 1])          |
| 1346    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(3)               | output              | torch.float32 |         | -0.2466291        | 0.9788855        | -0.0000000     | 0.0545303             | torch.Size([2, 512, 32])         |
| 1347    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(3)               | input_0             | torch.float32 |         | -0.2466291        | 0.9788855        | -0.0000000     | 0.0545303             | torch.Size([2, 512, 32])         |
| 1347    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(3)               | input_1             | torch.float32 |         | -0.2466291        | 0.9788855        | -0.0000000     | 0.0545303             | torch.Size([2, 512, 32])         |
| 1347    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(3)               | output              | torch.float32 |         | 0.0000000         | 0.9582168        | 0.0545287      | 0.0110035             | torch.Size([2, 512, 32])         |
| 1348    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(3)     | input_0             | torch.float32 |         | 0.0000000         | 0.9582168        | 0.0545287      | 0.0110035             | torch.Size([2, 512, 32])         |
| 1348    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(3)     | output              | torch.float32 |         | 0.0406100         | 0.1295710        | 0.0545287      | 0.0003164             | torch.Size([2, 512, 1])          |
| 1349    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(3)             | input               | torch.float32 |         | 0.0406100         | 0.1295710        | 0.0545287      | 0.0003164             | torch.Size([2, 512, 1])          |
| 1349    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(3)             | output              | torch.float32 |         | 2.7779820         | 4.9616933        | 4.3974404      | 0.2495604             | torch.Size([2, 512, 1])          |
| 1350    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(3)           | input_0             | torch.float32 |         | -0.2466291        | 0.9788855        | -0.0000000     | 0.0545303             | torch.Size([2, 512, 32])         |
| 1350    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(3)           | input_1             | torch.float32 |         | 2.7779820         | 4.9616933        | 4.3974404      | 0.2495604             | torch.Size([2, 512, 1])          |
| 1350    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(3)           | output              | torch.float32 |         | -0.6898143        | 4.0054893        | -0.0000000     | 0.9998347             | torch.Size([2, 512, 32])         |
| 1351    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(3)      | input               | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 1351    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(3)      | output              | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 1352    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(3)        | input_0             | torch.float32 |         | -0.6898143        | 4.0054893        | -0.0000000     | 0.9998347             | torch.Size([2, 512, 32])         |
| 1352    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(3)        | input_1             | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 1352    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(3)        | output              | torch.float32 |         | -0.8104170        | 4.3372908        | 0.0038910      | 1.0115026             | torch.Size([2, 512, 32])         |
| 1353    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(3)        | input               | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 1353    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(3)        | output              | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 1354    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(3)          | input_0             | torch.float32 |         | -0.8104170        | 4.3372908        | 0.0038910      | 1.0115026             | torch.Size([2, 512, 32])         |
| 1354    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(3)          | input_1             | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 1354    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(3)          | output              | torch.float32 |         | -0.7773572        | 4.2568293        | 0.0323949      | 0.9319103             | torch.Size([2, 512, 32])         |
| 1355    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(3)                   | input               | torch.float32 |         | -0.7773572        | 4.2568293        | 0.0323949      | 0.9319103             | torch.Size([2, 512, 32])         |
| 1355    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(3)                   | weight              | torch.float32 |         | -1.0547366        | 0.5812716        | 0.0070099      | 0.0187704             | torch.Size([32, 32])             |
| 1355    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(3)                   | bias                | torch.float32 |         | -0.2183180        | 0.1396109        | -0.0140744     | 0.0103446             | torch.Size([32])                 |
| 1355    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(3)                   | output              | torch.float32 |         | -5.3412642        | 1.6981708        | -0.5340468     | 1.4850482             | torch.Size([2, 512, 32])         |
| 1356    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(3)                   | input               | torch.float32 |         | 0.0000000         | 1.6981708        | 0.2296623      | 0.1264110             | torch.Size([2, 512, 32])         |
| 1356    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(3)                   | output              | torch.float32 |         | 0.0000000         | 1.6981708        | 0.2296623      | 0.1264110             | torch.Size([2, 512, 32])         |
| 1357    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 1.6981708        | 0.2296623      | 0.1264110             | torch.Size([2, 512, 32])         |
| 1357    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(3)   | output              | torch.float32 |         | 0.1708691         | 0.3781126        | 0.2296623      | 0.0007798             | torch.Size([2, 512, 1])          |
| 1358    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(3)               | input_0             | torch.float32 |         | 0.0000000         | 1.6981708        | 0.2296623      | 0.1264110             | torch.Size([2, 512, 32])         |
| 1358    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(3)               | input_1             | torch.float32 |         | 0.1708691         | 0.3781126        | 0.2296623      | 0.0007798             | torch.Size([2, 512, 1])          |
| 1358    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(3)               | output              | torch.float32 |         | -0.3781126        | 1.4339657        | 0.0000000      | 0.1256319             | torch.Size([2, 512, 32])         |
| 1359    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(3)               | input_0             | torch.float32 |         | -0.3781126        | 1.4339657        | 0.0000000      | 0.1256319             | torch.Size([2, 512, 32])         |
| 1359    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(3)               | input_1             | torch.float32 |         | -0.3781126        | 1.4339657        | 0.0000000      | 0.1256319             | torch.Size([2, 512, 32])         |
| 1359    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(3)               | output              | torch.float32 |         | 0.0000000         | 2.0562575        | 0.1256281      | 0.0524239             | torch.Size([2, 512, 32])         |
| 1360    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(3)     | input_0             | torch.float32 |         | 0.0000000         | 2.0562575        | 0.1256281      | 0.0524239             | torch.Size([2, 512, 32])         |
| 1360    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(3)     | output              | torch.float32 |         | 0.0791977         | 0.2590959        | 0.1256281      | 0.0005566             | torch.Size([2, 512, 1])          |
| 1361    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(3)             | input               | torch.float32 |         | 0.0791977         | 0.2590959        | 0.1256281      | 0.0005566             | torch.Size([2, 512, 1])          |
| 1361    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(3)             | output              | torch.float32 |         | 1.9645420         | 3.5531714        | 2.8532517      | 0.0561189             | torch.Size([2, 512, 1])          |
| 1362    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(3)           | input_0             | torch.float32 |         | -0.3781126        | 1.4339657        | 0.0000000      | 0.1256319             | torch.Size([2, 512, 32])         |
| 1362    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(3)           | input_1             | torch.float32 |         | 1.9645420         | 3.5531714        | 2.8532517      | 0.0561189             | torch.Size([2, 512, 1])          |
| 1362    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(3)           | output              | torch.float32 |         | -0.7622544        | 3.5703292        | 0.0000000      | 0.9999486             | torch.Size([2, 512, 32])         |
| 1363    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(3)      | input               | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 1363    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(3)      | output              | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 1364    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(3)        | input_0             | torch.float32 |         | -0.7622544        | 3.5703292        | 0.0000000      | 0.9999486             | torch.Size([2, 512, 32])         |
| 1364    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(3)        | input_1             | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 1364    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(3)        | output              | torch.float32 |         | -0.8535855        | 3.6434591        | -0.0020922     | 0.9792065             | torch.Size([2, 512, 32])         |
| 1365    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(3)        | input               | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 1365    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(3)        | output              | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 1366    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(3)          | input_0             | torch.float32 |         | -0.8535855        | 3.6434591        | -0.0020922     | 0.9792065             | torch.Size([2, 512, 32])         |
| 1366    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(3)          | input_1             | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 1366    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(3)          | output              | torch.float32 |         | -0.8388913        | 3.6026428        | 0.0221521      | 0.9227403             | torch.Size([2, 512, 32])         |
| 1367    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(3)                   | input               | torch.float32 |         | -0.8388913        | 3.6026428        | 0.0221521      | 0.9227403             | torch.Size([2, 512, 32])         |
| 1367    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(3)                   | weight              | torch.float32 |         | -0.4480607        | 0.3678726        | 0.0004879      | 0.0160908             | torch.Size([32, 32])             |
| 1367    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(3)                   | bias                | torch.float32 |         | -0.1861591        | 0.1739754        | 0.0155446      | 0.0137690             | torch.Size([32])                 |
| 1367    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(3)                   | output              | torch.float32 |         | -3.6787336        | 2.4413767        | -0.3115238     | 1.5595520             | torch.Size([2, 512, 32])         |
| 1368    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(3)                   | input               | torch.float32 |         | 0.0000000         | 2.4413767        | 0.3342814      | 0.1952498             | torch.Size([2, 512, 32])         |
| 1368    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(3)                   | output              | torch.float32 |         | 0.0000000         | 2.4413767        | 0.3342814      | 0.1952498             | torch.Size([2, 512, 32])         |
| 1369    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 2.4413767        | 0.3342814      | 0.1952498             | torch.Size([2, 512, 32])         |
| 1369    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(3)   | output              | torch.float32 |         | 0.2535476         | 0.4075041        | 0.3342814      | 0.0003936             | torch.Size([2, 512, 1])          |
| 1370    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(3)               | input_0             | torch.float32 |         | 0.0000000         | 2.4413767        | 0.3342814      | 0.1952498             | torch.Size([2, 512, 32])         |
| 1370    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(3)               | input_1             | torch.float32 |         | 0.2535476         | 0.4075041        | 0.3342814      | 0.0003936             | torch.Size([2, 512, 1])          |
| 1370    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(3)               | output              | torch.float32 |         | -0.4075041        | 2.1657996        | 0.0000000      | 0.1948566             | torch.Size([2, 512, 32])         |
| 1371    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(3)               | input_0             | torch.float32 |         | -0.4075041        | 2.1657996        | 0.0000000      | 0.1948566             | torch.Size([2, 512, 32])         |
| 1371    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(3)               | input_1             | torch.float32 |         | -0.4075041        | 2.1657996        | 0.0000000      | 0.1948566             | torch.Size([2, 512, 32])         |
| 1371    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(3)               | output              | torch.float32 |         | 0.0000000         | 4.6906881        | 0.1948506      | 0.1015614             | torch.Size([2, 512, 32])         |
| 1372    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(3)     | input_0             | torch.float32 |         | 0.0000000         | 4.6906881        | 0.1948506      | 0.1015614             | torch.Size([2, 512, 32])         |
| 1372    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(3)     | output              | torch.float32 |         | 0.1581586         | 0.2804516        | 0.1948506      | 0.0003179             | torch.Size([2, 512, 1])          |
| 1373    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(3)             | input               | torch.float32 |         | 0.1581586         | 0.2804516        | 0.1948506      | 0.0003179             | torch.Size([2, 512, 1])          |
| 1373    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(3)             | output              | torch.float32 |         | 1.8882666         | 2.5144317        | 2.2717638      | 0.0089602             | torch.Size([2, 512, 1])          |
| 1374    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(3)           | input_0             | torch.float32 |         | -0.4075041        | 2.1657996        | 0.0000000      | 0.1948566             | torch.Size([2, 512, 32])         |
| 1374    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(3)           | input_1             | torch.float32 |         | 1.8882666         | 2.5144317        | 2.2717638      | 0.0089602             | torch.Size([2, 512, 1])          |
| 1374    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(3)           | output              | torch.float32 |         | -0.8643999        | 4.4299169        | 0.0000000      | 0.9999789             | torch.Size([2, 512, 32])         |
| 1375    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(3)      | input               | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 1375    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(3)      | output              | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 1376    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(3)        | input_0             | torch.float32 |         | -0.8643999        | 4.4299169        | 0.0000000      | 0.9999789             | torch.Size([2, 512, 32])         |
| 1376    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(3)        | input_1             | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 1376    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(3)        | output              | torch.float32 |         | -0.9586589        | 4.7229567        | -0.0040248     | 0.9946425             | torch.Size([2, 512, 32])         |
| 1377    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(3)        | input               | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 1377    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(3)        | output              | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 1378    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(3)          | input_0             | torch.float32 |         | -0.9586589        | 4.7229567        | -0.0040248     | 0.9946425             | torch.Size([2, 512, 32])         |
| 1378    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(3)          | input_1             | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 1378    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(3)          | output              | torch.float32 |         | -0.9571756        | 4.7445626        | 0.0031449      | 0.9706282             | torch.Size([2, 512, 32])         |
| 1379    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(3)                   | input               | torch.float32 |         | -0.9571756        | 4.7445626        | 0.0031449      | 0.9706282             | torch.Size([2, 512, 32])         |
| 1379    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(3)                   | weight              | torch.float32 |         | -0.5597425        | 0.7001730        | 0.0015679      | 0.0160348             | torch.Size([32, 32])             |
| 1379    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(3)                   | bias                | torch.float32 |         | -0.1810580        | 0.1736723        | -0.0279047     | 0.0091159             | torch.Size([32])                 |
| 1379    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(3)                   | output              | torch.float32 |         | -4.3248143        | 3.0633049        | -0.2481656     | 1.2399104             | torch.Size([2, 512, 32])         |
| 1380    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(3)                  | input               | torch.float32 |         | 0.0000000         | 3.0633049        | 0.2835215      | 0.3423086             | torch.Size([2, 512, 32])         |
| 1380    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(3)                  | output              | torch.float32 |         | 0.0000000         | 3.0633049        | 0.2835215      | 0.3423086             | torch.Size([2, 512, 32])         |
| 1381    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(3)  | input_0             | torch.float32 |         | 0.0000000         | 3.0633049        | 0.2835215      | 0.3423086             | torch.Size([2, 512, 32])         |
| 1381    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(3)  | output              | torch.float32 |         | 0.2223001         | 0.3945178        | 0.2835215      | 0.0013110             | torch.Size([2, 512, 1])          |
| 1382    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(3)              | input_0             | torch.float32 |         | 0.0000000         | 3.0633049        | 0.2835215      | 0.3423086             | torch.Size([2, 512, 32])         |
| 1382    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(3)              | input_1             | torch.float32 |         | 0.2223001         | 0.3945178        | 0.2835215      | 0.0013110             | torch.Size([2, 512, 1])          |
| 1382    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(3)              | output              | torch.float32 |         | -0.3945178        | 2.7781553        | -0.0000000     | 0.3409989             | torch.Size([2, 512, 32])         |
| 1383    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(3)              | input_0             | torch.float32 |         | -0.3945178        | 2.7781553        | -0.0000000     | 0.3409989             | torch.Size([2, 512, 32])         |
| 1383    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(3)              | input_1             | torch.float32 |         | -0.3945178        | 2.7781553        | -0.0000000     | 0.3409989             | torch.Size([2, 512, 32])         |
| 1383    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(3)              | output              | torch.float32 |         | 0.0000000         | 7.7181468        | 0.3409885      | 1.1528070             | torch.Size([2, 512, 32])         |
| 1384    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(3)    | input_0             | torch.float32 |         | 0.0000000         | 7.7181468        | 0.3409885      | 1.1528070             | torch.Size([2, 512, 32])         |
| 1384    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(3)    | output              | torch.float32 |         | 0.1443623         | 0.4190856        | 0.3409885      | 0.0052279             | torch.Size([2, 512, 1])          |
| 1385    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(3)            | input               | torch.float32 |         | 0.1443623         | 0.4190856        | 0.3409885      | 0.0052279             | torch.Size([2, 512, 1])          |
| 1385    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(3)            | output              | torch.float32 |         | 1.5446975         | 2.6318314        | 1.7514114      | 0.0578448             | torch.Size([2, 512, 1])          |
| 1386    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(3)          | input_0             | torch.float32 |         | -0.3945178        | 2.7781553        | -0.0000000     | 0.3409989             | torch.Size([2, 512, 32])         |
| 1386    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(3)          | input_1             | torch.float32 |         | 1.5446975         | 2.6318314        | 1.7514114      | 0.0578448             | torch.Size([2, 512, 1])          |
| 1386    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(3)          | output              | torch.float32 |         | -0.7589675        | 4.7803206        | 0.0000000      | 0.9999992             | torch.Size([2, 512, 32])         |
| 1387    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(3)     | input               | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 1387    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(3)     | output              | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 1388    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(3)       | input_0             | torch.float32 |         | -0.7589675        | 4.7803206        | 0.0000000      | 0.9999992             | torch.Size([2, 512, 32])         |
| 1388    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(3)       | input_1             | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 1388    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(3)       | output              | torch.float32 |         | -1.1147975        | 4.1528516        | -0.0632855     | 0.8668421             | torch.Size([2, 512, 32])         |
| 1389    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(3)       | input               | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 1389    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(3)       | output              | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 1390    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(3)         | input_0             | torch.float32 |         | -1.1147975        | 4.1528516        | -0.0632855     | 0.8668421             | torch.Size([2, 512, 32])         |
| 1390    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(3)         | input_1             | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 1390    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(3)         | output              | torch.float32 |         | -0.9324121        | 4.0449653        | 0.0170936      | 0.7778260             | torch.Size([2, 512, 32])         |
| 1391    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.5874481       | 53.7214432       | 0.2118326      | 78.6301498            | torch.Size([2, 512, 11])         |
| 1391    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -2.3441241        | 0.7142177        | -0.2230550     | 0.4365802             | torch.Size([2, 512, 3])          |
| 1392    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(3)                   | input               | torch.float32 |         | -2.3441241        | 0.7142177        | -0.2230550     | 0.4365802             | torch.Size([2, 512, 3])          |
| 1392    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(3)                   | weight              | torch.float32 |         | -1.0475703        | 0.9848034        | -0.0054673     | 0.2080412             | torch.Size([64, 3])              |
| 1392    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(3)                   | bias                | torch.float32 |         | -0.8030427        | 0.5068271        | -0.0504076     | 0.1294928             | torch.Size([64])                 |
| 1392    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(3)                   | output              | torch.float32 |         | -2.0702062        | 1.5250528        | -0.0823897     | 0.3034713             | torch.Size([2, 512, 64])         |
| 1393    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(3)                   | input               | torch.float32 |         | 0.0000000         | 1.5250528        | 0.1736128      | 0.0669879             | torch.Size([2, 512, 64])         |
| 1393    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(3)                   | output              | torch.float32 |         | 0.0000000         | 1.5250528        | 0.1736128      | 0.0669879             | torch.Size([2, 512, 64])         |
| 1394    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 1.5250528        | 0.1736128      | 0.0669879             | torch.Size([2, 512, 64])         |
| 1394    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(3)   | output              | torch.float32 |         | 0.1238686         | 0.2950672        | 0.1736128      | 0.0048773             | torch.Size([2, 512, 1])          |
| 1395    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(3)               | input_0             | torch.float32 |         | 0.0000000         | 1.5250528        | 0.1736128      | 0.0669879             | torch.Size([2, 512, 64])         |
| 1395    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(3)               | input_1             | torch.float32 |         | 0.1238686         | 0.2950672        | 0.1736128      | 0.0048773             | torch.Size([2, 512, 1])          |
| 1395    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(3)               | output              | torch.float32 |         | -0.2950672        | 1.2299856        | -0.0000000     | 0.0621152             | torch.Size([2, 512, 64])         |
| 1396    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(3)               | input_0             | torch.float32 |         | -0.2950672        | 1.2299856        | -0.0000000     | 0.0621152             | torch.Size([2, 512, 64])         |
| 1396    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(3)               | input_1             | torch.float32 |         | -0.2950672        | 1.2299856        | -0.0000000     | 0.0621152             | torch.Size([2, 512, 64])         |
| 1396    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(3)               | output              | torch.float32 |         | 0.0000000         | 1.5128646        | 0.0621143      | 0.0233512             | torch.Size([2, 512, 64])         |
| 1397    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(3)     | input_0             | torch.float32 |         | 0.0000000         | 1.5128646        | 0.0621143      | 0.0233512             | torch.Size([2, 512, 64])         |
| 1397    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(3)     | output              | torch.float32 |         | 0.0269716         | 0.1516432        | 0.0621143      | 0.0026336             | torch.Size([2, 512, 1])          |
| 1398    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(3)             | input               | torch.float32 |         | 0.0269716         | 0.1516432        | 0.0621143      | 0.0026336             | torch.Size([2, 512, 1])          |
| 1398    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(3)             | output              | torch.float32 |         | 2.5678766         | 6.0878773        | 4.8286977      | 1.8019134             | torch.Size([2, 512, 1])          |
| 1399    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(3)           | input_0             | torch.float32 |         | -0.2950672        | 1.2299856        | -0.0000000     | 0.0621152             | torch.Size([2, 512, 64])         |
| 1399    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(3)           | input_1             | torch.float32 |         | 2.5678766         | 6.0878773        | 4.8286977      | 1.8019134             | torch.Size([2, 512, 1])          |
| 1399    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(3)           | output              | torch.float32 |         | -0.8120650        | 3.4035282        | -0.0000000     | 0.9997640             | torch.Size([2, 512, 64])         |
| 1400    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(3)      | input               | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 1400    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(3)      | output              | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 1401    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(3)        | input_0             | torch.float32 |         | -0.8120650        | 3.4035282        | -0.0000000     | 0.9997640             | torch.Size([2, 512, 64])         |
| 1401    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(3)        | input_1             | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 1401    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(3)        | output              | torch.float32 |         | -0.9010212        | 3.3112395        | 0.0114385      | 0.9451862             | torch.Size([2, 512, 64])         |
| 1402    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(3)        | input               | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 1402    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(3)        | output              | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 1403    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(3)          | input_0             | torch.float32 |         | -0.9010212        | 3.3112395        | 0.0114385      | 0.9451862             | torch.Size([2, 512, 64])         |
| 1403    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(3)          | input_1             | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 1403    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(3)          | output              | torch.float32 |         | -0.8941728        | 3.2652428        | 0.0418925      | 0.8523017             | torch.Size([2, 512, 64])         |
| 1404    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(3)                   | input               | torch.float32 |         | -0.8941728        | 3.2652428        | 0.0418925      | 0.8523017             | torch.Size([2, 512, 64])         |
| 1404    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(3)                   | weight              | torch.float32 |         | -0.4523612        | 0.4813256        | -0.0014562     | 0.0096743             | torch.Size([64, 64])             |
| 1404    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(3)                   | bias                | torch.float32 |         | -0.1183558        | 0.2243176        | 0.0150283      | 0.0049289             | torch.Size([64])                 |
| 1404    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(3)                   | output              | torch.float32 |         | -5.3252602        | 2.7271533        | -0.4229335     | 2.2066894             | torch.Size([2, 512, 64])         |
| 1405    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(3)                   | input               | torch.float32 |         | 0.0000000         | 2.7271533        | 0.3294746      | 0.2196693             | torch.Size([2, 512, 64])         |
| 1405    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(3)                   | output              | torch.float32 |         | 0.0000000         | 2.7271533        | 0.3294746      | 0.2196693             | torch.Size([2, 512, 64])         |
| 1406    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 2.7271533        | 0.3294746      | 0.2196693             | torch.Size([2, 512, 64])         |
| 1406    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(3)   | output              | torch.float32 |         | 0.2092162         | 0.5406333        | 0.3294746      | 0.0075090             | torch.Size([2, 512, 1])          |
| 1407    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(3)               | input_0             | torch.float32 |         | 0.0000000         | 2.7271533        | 0.3294746      | 0.2196693             | torch.Size([2, 512, 64])         |
| 1407    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(3)               | input_1             | torch.float32 |         | 0.2092162         | 0.5406333        | 0.3294746      | 0.0075090             | torch.Size([2, 512, 1])          |
| 1407    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(3)               | output              | torch.float32 |         | -0.5406333        | 2.3103452        | 0.0000000      | 0.2121675             | torch.Size([2, 512, 64])         |
| 1408    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(3)               | input_0             | torch.float32 |         | -0.5406333        | 2.3103452        | 0.0000000      | 0.2121675             | torch.Size([2, 512, 64])         |
| 1408    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(3)               | input_1             | torch.float32 |         | -0.5406333        | 2.3103452        | 0.0000000      | 0.2121675             | torch.Size([2, 512, 64])         |
| 1408    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(3)               | output              | torch.float32 |         | 0.0000000         | 5.3376946        | 0.2121643      | 0.2204593             | torch.Size([2, 512, 64])         |
| 1409    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(3)     | input_0             | torch.float32 |         | 0.0000000         | 5.3376946        | 0.2121643      | 0.2204593             | torch.Size([2, 512, 64])         |
| 1409    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(3)     | output              | torch.float32 |         | 0.0817346         | 0.4854635        | 0.2121643      | 0.0074881             | torch.Size([2, 512, 1])          |
| 1410    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(3)             | input               | torch.float32 |         | 0.0817346         | 0.4854635        | 0.2121643      | 0.0074881             | torch.Size([2, 512, 1])          |
| 1410    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(3)             | output              | torch.float32 |         | 1.4352160         | 3.4976022        | 2.3763118      | 0.4417493             | torch.Size([2, 512, 1])          |
| 1411    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(3)           | input_0             | torch.float32 |         | -0.5406333        | 2.3103452        | 0.0000000      | 0.2121675             | torch.Size([2, 512, 64])         |
| 1411    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(3)           | input_1             | torch.float32 |         | 1.4352160         | 3.4976022        | 2.3763118      | 0.4417493             | torch.Size([2, 512, 1])          |
| 1411    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(3)           | output              | torch.float32 |         | -0.8829210        | 4.2689910        | 0.0000000      | 0.9999544             | torch.Size([2, 512, 64])         |
| 1412    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(3)      | input               | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 1412    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(3)      | output              | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 1413    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(3)        | input_0             | torch.float32 |         | -0.8829210        | 4.2689910        | 0.0000000      | 0.9999544             | torch.Size([2, 512, 64])         |
| 1413    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(3)        | input_1             | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 1413    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(3)        | output              | torch.float32 |         | -0.9444834        | 4.1645532        | 0.0038223      | 0.9862469             | torch.Size([2, 512, 64])         |
| 1414    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(3)        | input               | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 1414    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(3)        | output              | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 1415    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(3)          | input_0             | torch.float32 |         | -0.9444834        | 4.1645532        | 0.0038223      | 0.9862469             | torch.Size([2, 512, 64])         |
| 1415    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(3)          | input_1             | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 1415    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(3)          | output              | torch.float32 |         | -0.9078324        | 4.1512289        | 0.0203166      | 0.9397097             | torch.Size([2, 512, 64])         |
| 1416    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(3)                   | input               | torch.float32 |         | -0.9078324        | 4.1512289        | 0.0203166      | 0.9397097             | torch.Size([2, 512, 64])         |
| 1416    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(3)                   | weight              | torch.float32 |         | -0.5707353        | 0.3620123        | -0.0010372     | 0.0088292             | torch.Size([64, 64])             |
| 1416    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(3)                   | bias                | torch.float32 |         | -0.1720246        | 0.1340137        | -0.0235144     | 0.0050507             | torch.Size([64])                 |
| 1416    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(3)                   | output              | torch.float32 |         | -5.3872738        | 3.7277455        | -0.3527153     | 2.1764448             | torch.Size([2, 512, 64])         |
| 1417    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(3)                   | input               | torch.float32 |         | 0.0000000         | 3.7277455        | 0.4485282      | 0.5077130             | torch.Size([2, 512, 64])         |
| 1417    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(3)                   | output              | torch.float32 |         | 0.0000000         | 3.7277455        | 0.4485282      | 0.5077130             | torch.Size([2, 512, 64])         |
| 1418    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(3)   | input_0             | torch.float32 |         | 0.0000000         | 3.7277455        | 0.4485282      | 0.5077130             | torch.Size([2, 512, 64])         |
| 1418    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(3)   | output              | torch.float32 |         | 0.3596318         | 0.5183737        | 0.4485282      | 0.0027496             | torch.Size([2, 512, 1])          |
| 1419    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(3)               | input_0             | torch.float32 |         | 0.0000000         | 3.7277455        | 0.4485282      | 0.5077130             | torch.Size([2, 512, 64])         |
| 1419    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(3)               | input_1             | torch.float32 |         | 0.3596318         | 0.5183737        | 0.4485282      | 0.0027496             | torch.Size([2, 512, 1])          |
| 1419    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(3)               | output              | torch.float32 |         | -0.5183737        | 3.2345958        | -0.0000000     | 0.5049660             | torch.Size([2, 512, 64])         |
| 1420    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(3)               | input_0             | torch.float32 |         | -0.5183737        | 3.2345958        | -0.0000000     | 0.5049660             | torch.Size([2, 512, 64])         |
| 1420    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(3)               | input_1             | torch.float32 |         | -0.5183737        | 3.2345958        | -0.0000000     | 0.5049660             | torch.Size([2, 512, 64])         |
| 1420    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(3)               | output              | torch.float32 |         | 0.0000000         | 10.4626102       | 0.5049583      | 1.1207203             | torch.Size([2, 512, 64])         |
| 1421    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(3)     | input_0             | torch.float32 |         | 0.0000000         | 10.4626102       | 0.5049583      | 1.1207203             | torch.Size([2, 512, 64])         |
| 1421    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(3)     | output              | torch.float32 |         | 0.3080547         | 0.7318137        | 0.5049583      | 0.0145118             | torch.Size([2, 512, 1])          |
| 1422    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(3)             | input               | torch.float32 |         | 0.3080547         | 0.7318137        | 0.5049583      | 0.0145118             | torch.Size([2, 512, 1])          |
| 1422    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(3)             | output              | torch.float32 |         | 1.1689522         | 1.8016856        | 1.4428606      | 0.0388652             | torch.Size([2, 512, 1])          |
| 1423    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(3)           | input_0             | torch.float32 |         | -0.5183737        | 3.2345958        | -0.0000000     | 0.5049660             | torch.Size([2, 512, 64])         |
| 1423    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(3)           | input_1             | torch.float32 |         | 1.1689522         | 1.8016856        | 1.4428606      | 0.0388652             | torch.Size([2, 512, 1])          |
| 1423    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(3)           | output              | torch.float32 |         | -0.6848300        | 4.1674652        | -0.0000000     | 0.9999940             | torch.Size([2, 512, 64])         |
| 1424    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(3)      | input               | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 1424    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(3)      | output              | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 1425    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(3)        | input_0             | torch.float32 |         | -0.6848300        | 4.1674652        | -0.0000000     | 0.9999940             | torch.Size([2, 512, 64])         |
| 1425    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(3)        | input_1             | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 1425    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(3)        | output              | torch.float32 |         | -0.7872369        | 4.3090296        | 0.0063969      | 1.0027894             | torch.Size([2, 512, 64])         |
| 1426    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(3)        | input               | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 1426    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(3)        | output              | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 1427    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(3)          | input_0             | torch.float32 |         | -0.7872369        | 4.3090296        | 0.0063969      | 1.0027894             | torch.Size([2, 512, 64])         |
| 1427    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(3)          | input_1             | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 1427    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(3)          | output              | torch.float32 |         | -0.7629828        | 4.2946591        | 0.0196797      | 0.9825010             | torch.Size([2, 512, 64])         |
| 1428    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(3)                   | input               | torch.float32 |         | -0.7629828        | 4.2946591        | 0.0196797      | 0.9825010             | torch.Size([2, 512, 64])         |
| 1428    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(3)                   | weight              | torch.float32 |         | -0.5701389        | 0.3477888        | 0.0006721      | 0.0085883             | torch.Size([64, 64])             |
| 1428    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(3)                   | bias                | torch.float32 |         | -0.1677032        | 0.1709885        | -0.0237130     | 0.0070098             | torch.Size([64])                 |
| 1428    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(3)                   | output              | torch.float32 |         | -4.7939820        | 7.2184930        | -0.5033373     | 1.8061064             | torch.Size([2, 512, 64])         |
| 1429    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(3)                  | input               | torch.float32 |         | 0.0000000         | 7.2184930        | 0.2554396      | 0.6805324             | torch.Size([2, 512, 64])         |
| 1429    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(3)                  | output              | torch.float32 |         | 0.0000000         | 7.2184930        | 0.2554396      | 0.6805324             | torch.Size([2, 512, 64])         |
| 1430    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(3)  | input_0             | torch.float32 |         | 0.0000000         | 7.2184930        | 0.2554396      | 0.6805324             | torch.Size([2, 512, 64])         |
| 1430    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(3)  | output              | torch.float32 |         | 0.2031514         | 0.3341205        | 0.2554396      | 0.0013311             | torch.Size([2, 512, 1])          |
| 1431    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(3)              | input_0             | torch.float32 |         | 0.0000000         | 7.2184930        | 0.2554396      | 0.6805324             | torch.Size([2, 512, 64])         |
| 1431    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(3)              | input_1             | torch.float32 |         | 0.2031514         | 0.3341205        | 0.2554396      | 0.0013311             | torch.Size([2, 512, 1])          |
| 1431    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(3)              | output              | torch.float32 |         | -0.3341205        | 7.0128517        | -0.0000000     | 0.6792026             | torch.Size([2, 512, 64])         |
| 1432    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(3)              | input_0             | torch.float32 |         | -0.3341205        | 7.0128517        | -0.0000000     | 0.6792026             | torch.Size([2, 512, 64])         |
| 1432    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(3)              | input_1             | torch.float32 |         | -0.3341205        | 7.0128517        | -0.0000000     | 0.6792026             | torch.Size([2, 512, 64])         |
| 1432    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(3)              | output              | torch.float32 |         | 0.0000000         | 49.1800880       | 0.6791922      | 19.7820835            | torch.Size([2, 512, 64])         |
| 1433    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(3)    | input_0             | torch.float32 |         | 0.0000000         | 49.1800880       | 0.6791922      | 19.7820835            | torch.Size([2, 512, 64])         |
| 1433    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(3)    | output              | torch.float32 |         | 0.3924933         | 0.8296916        | 0.6791922      | 0.0122106             | torch.Size([2, 512, 1])          |
| 1434    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(3)            | input               | torch.float32 |         | 0.3924933         | 0.8296916        | 0.6791922      | 0.0122106             | torch.Size([2, 512, 1])          |
| 1434    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(3)            | output              | torch.float32 |         | 1.0978400         | 1.5961670        | 1.2265137      | 0.0114627             | torch.Size([2, 512, 1])          |
| 1435    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(3)          | input_0             | torch.float32 |         | -0.3341205        | 7.0128517        | -0.0000000     | 0.6792026             | torch.Size([2, 512, 64])         |
| 1435    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(3)          | input_1             | torch.float32 |         | 1.0978400         | 1.5961670        | 1.2265137      | 0.0114627             | torch.Size([2, 512, 1])          |
| 1435    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(3)          | output              | torch.float32 |         | -0.4688481        | 7.7557592        | 0.0000000      | 1.0000001             | torch.Size([2, 512, 64])         |
| 1436    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(3)     | input               | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 1436    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(3)     | output              | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 1437    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(3)       | input_0             | torch.float32 |         | -0.4688481        | 7.7557592        | 0.0000000      | 1.0000001             | torch.Size([2, 512, 64])         |
| 1437    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(3)       | input_1             | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 1437    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(3)       | output              | torch.float32 |         | -0.6012976        | 5.7832408        | -0.0334846     | 0.6998010             | torch.Size([2, 512, 64])         |
| 1438    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(3)       | input               | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 1438    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(3)       | output              | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 1439    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(3)         | input_0             | torch.float32 |         | -0.6012976        | 5.7832408        | -0.0334846     | 0.6998010             | torch.Size([2, 512, 64])         |
| 1439    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(3)         | input_1             | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 1439    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(3)         | output              | torch.float32 |         | -0.5986651        | 5.6909342        | 0.0565208      | 0.6122350             | torch.Size([2, 512, 64])         |
| 1440    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | input_0             | torch.float32 |         | -0.8532427        | 7.5853052        | 0.0714943      | 0.8688896             | torch.Size([2, 512, 128])        |
| 1440    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | input_1             | torch.float32 |         | -1.7289910        | 5.8493681        | 0.0064566      | 1.3257949             | torch.Size([2, 512, 32])         |
| 1440    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | input_2             | torch.float32 |         | -0.9324121        | 4.0449653        | 0.0170936      | 0.7778260             | torch.Size([2, 512, 32])         |
| 1440    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | input_3             | torch.float32 |         | -0.5986651        | 5.6909342        | 0.0565208      | 0.6122350             | torch.Size([2, 512, 64])         |
| 1440    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(3)                        | output              | torch.float32 |         | -1.7289910        | 7.5853052        | 0.0528211      | 0.8510517             | torch.Size([2, 512, 256])        |
| 1441    | torch.nn.modules.linear.Linear                                                    | head.fc_before(4)                                 | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 1441    | torch.nn.modules.linear.Linear                                                    | head.fc_before(4)                                 | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 1441    | torch.nn.modules.linear.Linear                                                    | head.fc_before(4)                                 | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 1442    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.14.query_cat                          | input_0             | torch.float32 |         | -5.6462903        | 3.6429195        | 0.0014595      | 0.8091727             | torch.Size([2, 512, 256])        |
| 1442    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.14.query_cat                          | input_1             | torch.float32 |         | -1.7289910        | 7.5853052        | 0.0528211      | 0.8510517             | torch.Size([2, 512, 256])        |
| 1442    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.14.query_cat                          | output              | torch.float32 |         | -5.6462903        | 7.5853052        | 0.0271403      | 0.8307702             | torch.Size([2, 512, 512])        |
| 1443    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.14.key_cat                            | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 1443    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.14.key_cat                            | input_1             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0508909      | 0.8514420             | torch.Size([2, 256, 256])        |
| 1443    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.14.key_cat                            | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 1444    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | input_0             | torch.float32 |         | -5.6462903        | 7.5853052        | 0.0271403      | 0.8307702             | torch.Size([2, 512, 512])        |
| 1444    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | output              | torch.float32 |         | -5.6462903        | 7.5853052        | 0.0271403      | 0.8307702             | torch.Size([512, 2, 512])        |
| 1445    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 1445    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1446    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 1446    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1447    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | input_0             | torch.float32 |         | -5.6462903        | 7.5853052        | 0.0271403      | 0.8307702             | torch.Size([512, 2, 512])        |
| 1447    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | output              | torch.float32 |         | -5.6462903        | 7.5853052        | 0.0271403      | 0.8307702             | torch.Size([512, 2, 512])        |
| 1448    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1448    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1449    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1449    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1450    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.q_proj                        | input               | torch.float32 |         | -5.6462903        | 7.5853052        | 0.0271403      | 0.8307702             | torch.Size([512, 2, 512])        |
| 1450    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.q_proj                        | weight              | torch.float32 |         | -0.2777553        | 0.2990031        | 0.0002842      | 0.0034354             | torch.Size([512, 512])           |
| 1450    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.q_proj                        | bias                | torch.float32 |         | -0.1035601        | 0.1086727        | -0.0026900     | 0.0010697             | torch.Size([512])                |
| 1450    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.q_proj                        | output              | torch.float32 |         | -15.8403912       | 15.7994967       | -0.0721584     | 12.8691454            | torch.Size([512, 2, 512])        |
| 1451    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.k_proj                        | input               | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1451    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.k_proj                        | weight              | torch.float32 |         | -0.3452844        | 0.4038241        | 0.0001369      | 0.0035582             | torch.Size([512, 512])           |
| 1451    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.k_proj                        | bias                | torch.float32 |         | -0.0042569        | 0.0036242        | -0.0000186     | 0.0000007             | torch.Size([512])                |
| 1451    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.k_proj                        | output              | torch.float32 |         | -5.3303566        | 6.5564733        | 0.1573640      | 5.0638294             | torch.Size([256, 2, 512])        |
| 1452    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.v_proj                        | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1452    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.v_proj                        | weight              | torch.float32 |         | -0.2388043        | 0.2738543        | 0.0000625      | 0.0012634             | torch.Size([512, 512])           |
| 1452    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.v_proj                        | bias                | torch.float32 |         | -0.0574798        | 0.0562508        | -0.0010481     | 0.0004109             | torch.Size([512])                |
| 1452    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.v_proj                        | output              | torch.float32 |         | -0.0574798        | 0.0562508        | -0.0010481     | 0.0004101             | torch.Size([256, 2, 512])        |
| 1453    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | input_0             | torch.float32 |         | -15.8403912       | 15.7994967       | -0.0721584     | 12.8691454            | torch.Size([512, 2, 512])        |
| 1453    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | output              | torch.float32 |         | -15.8403912       | 15.7994967       | -0.0721584     | 12.8691454            | torch.Size([512, 16, 64])        |
| 1454    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | input_0             | torch.float32 |         | -15.8403912       | 15.7994967       | -0.0721584     | 12.8691454            | torch.Size([512, 16, 64])        |
| 1454    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | output              | torch.float32 |         | -15.8403912       | 15.7994967       | -0.0721584     | 12.8691454            | torch.Size([16, 512, 64])        |
| 1455    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | input_0             | torch.float32 |         | -5.3303566        | 6.5564733        | 0.1573640      | 5.0638294             | torch.Size([256, 2, 512])        |
| 1455    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | output              | torch.float32 |         | -5.3303566        | 6.5564733        | 0.1573640      | 5.0638294             | torch.Size([256, 16, 64])        |
| 1456    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | input_0             | torch.float32 |         | -5.3303566        | 6.5564733        | 0.1573640      | 5.0638294             | torch.Size([256, 16, 64])        |
| 1456    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | output              | torch.float32 |         | -5.3303566        | 6.5564733        | 0.1573640      | 5.0638294             | torch.Size([16, 256, 64])        |
| 1457    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | input_0             | torch.float32 |         | -0.0574798        | 0.0562508        | -0.0010481     | 0.0004101             | torch.Size([256, 2, 512])        |
| 1457    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | output              | torch.float32 |         | -0.0574798        | 0.0562508        | -0.0010481     | 0.0004101             | torch.Size([256, 16, 64])        |
| 1458    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | input_0             | torch.float32 |         | -0.0574798        | 0.0562508        | -0.0010481     | 0.0004101             | torch.Size([256, 16, 64])        |
| 1458    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | output              | torch.float32 |         | -0.0574798        | 0.0562508        | -0.0010481     | 0.0004101             | torch.Size([16, 256, 64])        |
| 1459    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.14.attn.q_scale_mul                   | input_0             | torch.float32 |         | -15.8403912       | 15.7994967       | -0.0721584     | 12.8691454            | torch.Size([16, 512, 64])        |
| 1459    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.14.attn.q_scale_mul                   | output              | torch.float32 |         | -1.9800489        | 1.9749371        | -0.0090198     | 0.2010804             | torch.Size([16, 512, 64])        |
| 1460    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | input_0             | torch.float32 |         | -5.3303566        | 6.5564733        | 0.1573640      | 5.0638294             | torch.Size([16, 256, 64])        |
| 1460    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | output              | torch.float32 |         | -5.3303566        | 6.5564733        | 0.1573640      | 5.0638294             | torch.Size([16, 64, 256])        |
| 1461    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.14.attn.matmul                        | input_0             | torch.float32 |         | -1.9800489        | 1.9749371        | -0.0090198     | 0.2010804             | torch.Size([16, 512, 64])        |
| 1461    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.14.attn.matmul                        | input_1             | torch.float32 |         | -5.3303566        | 6.5564733        | 0.1573640      | 5.0638294             | torch.Size([16, 64, 256])        |
| 1461    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.14.attn.matmul                        | output              | torch.float32 |         | -138.9970245      | 109.0781784      | -10.2529650    | 1335.7073975          | torch.Size([16, 512, 256])       |
| 1462    | torch.Tensor.max                                                                  | head.layers.14.attn.softmax                       | input               | torch.float32 |         | -138.9970245      | 109.0781784      | -10.2529650    | 1335.7073975          | torch.Size([16, 512, 256])       |
| 1462    | torch.Tensor.max                                                                  | head.layers.14.attn.softmax                       | output_0            | torch.float32 |         | -138.9970245      | 109.0781784      | -10.2529650    | 1335.8697510          | torch.Size([16, 512, 1])         |
| 1462    | torch.Tensor.max                                                                  | head.layers.14.attn.softmax                       | output_1            | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 1])         |
| 1463    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.14.attn.softmax.sub                   | input_0             | torch.float32 |         | -138.9970245      | 109.0781784      | -10.2529650    | 1335.7073975          | torch.Size([16, 512, 256])       |
| 1463    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.14.attn.softmax.sub                   | input_1             | torch.float32 |         | -138.9970245      | 109.0781784      | -10.2529650    | 1335.8697510          | torch.Size([16, 512, 1])         |
| 1463    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.14.attn.softmax.sub                   | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1464    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.14.attn.softmax.exp                   | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1464    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.14.attn.softmax.exp                   | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1465    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.14.attn.softmax.sum                   | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1465    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.14.attn.softmax.sum                   | output              | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 1466    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.14.attn.softmax.reciprocal            | input               | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 1466    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.14.attn.softmax.reciprocal            | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 1467    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.14.attn.softmax.mul                   | input_0             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1467    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.14.attn.softmax.mul                   | input_1             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 1467    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.14.attn.softmax.mul                   | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1468    | torch.nn.modules.dropout.Dropout                                                  | head.layers.14.attn.attention_drop                | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1468    | torch.nn.modules.dropout.Dropout                                                  | head.layers.14.attn.attention_drop                | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1469    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.14.attn.attn_matmul                   | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1469    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.14.attn.attn_matmul                   | input_1             | torch.float32 |         | -0.0574798        | 0.0562508        | -0.0010481     | 0.0004101             | torch.Size([16, 256, 64])        |
| 1469    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.14.attn.attn_matmul                   | output              | torch.float32 |         | -0.0574799        | 0.0562506        | -0.0010481     | 0.0004101             | torch.Size([16, 512, 64])        |
| 1470    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | input_0             | torch.float32 |         | -0.0574799        | 0.0562506        | -0.0010481     | 0.0004101             | torch.Size([16, 512, 64])        |
| 1470    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | output              | torch.float32 |         | -0.0574799        | 0.0562506        | -0.0010481     | 0.0004101             | torch.Size([512, 16, 64])        |
| 1471    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | input_0             | torch.float32 |         | -0.0574799        | 0.0562506        | -0.0010481     | 0.0004101             | torch.Size([512, 16, 64])        |
| 1471    | torch.Tensor.reshape                                                              | head.layers.14.attn                               | output              | torch.float32 |         | -0.0574799        | 0.0562506        | -0.0010481     | 0.0004101             | torch.Size([512, 2, 512])        |
| 1472    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.out_proj                      | input               | torch.float32 |         | -0.0574799        | 0.0562506        | -0.0010481     | 0.0004101             | torch.Size([512, 2, 512])        |
| 1472    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.out_proj                      | weight              | torch.float32 |         | -0.1960477        | 0.2013985        | -0.0001637     | 0.0022644             | torch.Size([512, 512])           |
| 1472    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.out_proj                      | bias                | torch.float32 |         | -0.2318651        | 0.2497024        | 0.0100625      | 0.0055016             | torch.Size([512])                |
| 1472    | torch.nn.modules.linear.Linear                                                    | head.layers.14.attn.out_proj                      | output              | torch.float32 |         | -0.4594636        | 0.3937162        | 0.0141519      | 0.0128574             | torch.Size([512, 2, 512])        |
| 1473    | torch.Tensor.view                                                                 | head.layers.14.attn                               | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1473    | torch.Tensor.view                                                                 | head.layers.14.attn                               | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 1474    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.14.attn.attn_weights_mean             | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 1474    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.14.attn.attn_weights_mean             | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 512, 256])        |
| 1475    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | input_0             | torch.float32 |         | -0.4594636        | 0.3937162        | 0.0141519      | 0.0128574             | torch.Size([512, 2, 512])        |
| 1475    | torch.Tensor.transpose                                                            | head.layers.14.attn                               | output              | torch.float32 |         | -0.4594636        | 0.3937162        | 0.0141519      | 0.0128574             | torch.Size([2, 512, 512])        |
| 1476    | torch.nn.modules.dropout.Dropout                                                  | head.layers.14.dropout                            | input               | torch.float32 |         | -0.4594636        | 0.3937162        | 0.0141519      | 0.0128574             | torch.Size([2, 512, 512])        |
| 1476    | torch.nn.modules.dropout.Dropout                                                  | head.layers.14.dropout                            | output              | torch.float32 |         | -0.4594636        | 0.3937162        | 0.0141519      | 0.0128574             | torch.Size([2, 512, 512])        |
| 1477    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.14.add                                | input_0             | torch.float32 |         | -5.6462903        | 7.5853052        | 0.0271403      | 0.8307702             | torch.Size([2, 512, 512])        |
| 1477    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.14.add                                | input_1             | torch.float32 |         | -0.4594636        | 0.3937162        | 0.0141519      | 0.0128574             | torch.Size([2, 512, 512])        |
| 1477    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.14.add                                | output              | torch.float32 |         | -5.3094716        | 7.5190902        | 0.0412922      | 0.7855468             | torch.Size([2, 512, 512])        |
| 1478    | torch.nn.modules.linear.Linear                                                    | head.fc_after(4)                                  | input               | torch.float32 |         | -5.3094716        | 7.5190902        | 0.0412922      | 0.7855468             | torch.Size([2, 512, 512])        |
| 1478    | torch.nn.modules.linear.Linear                                                    | head.fc_after(4)                                  | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 1478    | torch.nn.modules.linear.Linear                                                    | head.fc_after(4)                                  | output              | torch.float32 |         | -7.7025394        | 8.3414631        | -0.0113865     | 0.9734664             | torch.Size([2, 512, 256])        |
| 1479    | torch.nn.modules.linear.Linear                                                    | head.fc_before(5)                                 | input               | torch.float32 |         | -7.7025394        | 8.3414631        | -0.0113865     | 0.9734664             | torch.Size([2, 512, 256])        |
| 1479    | torch.nn.modules.linear.Linear                                                    | head.fc_before(5)                                 | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 1479    | torch.nn.modules.linear.Linear                                                    | head.fc_before(5)                                 | output              | torch.float32 |         | -4.3171563        | 3.9669867        | 0.0002934      | 0.0508324             | torch.Size([2, 512, 512])        |
| 1480    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.15.query_cat                          | input_0             | torch.float32 |         | -7.7025394        | 8.3414631        | -0.0113865     | 0.9734664             | torch.Size([2, 512, 256])        |
| 1480    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.15.query_cat                          | input_1             | torch.float32 |         | -1.7289910        | 7.5853052        | 0.0528211      | 0.8510517             | torch.Size([2, 512, 256])        |
| 1480    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.15.query_cat                          | output              | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([2, 512, 512])        |
| 1481    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.15.key_cat                            | input_0             | torch.float32 |         | -7.7025394        | 8.3414631        | -0.0113865     | 0.9734664             | torch.Size([2, 512, 256])        |
| 1481    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.15.key_cat                            | input_1             | torch.float32 |         | -1.7289910        | 7.5853052        | 0.0528211      | 0.8510517             | torch.Size([2, 512, 256])        |
| 1481    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.15.key_cat                            | output              | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([2, 512, 512])        |
| 1482    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | input_0             | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([2, 512, 512])        |
| 1482    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | output              | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([512, 2, 512])        |
| 1483    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | input_0             | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([2, 512, 512])        |
| 1483    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | output              | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([512, 2, 512])        |
| 1484    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | input_0             | torch.float32 |         | -4.3171563        | 3.9669867        | 0.0002934      | 0.0508324             | torch.Size([2, 512, 512])        |
| 1484    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | output              | torch.float32 |         | -4.3171563        | 3.9669867        | 0.0002934      | 0.0508324             | torch.Size([512, 2, 512])        |
| 1485    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | input_0             | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([512, 2, 512])        |
| 1485    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | output              | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([512, 2, 512])        |
| 1486    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | input_0             | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([512, 2, 512])        |
| 1486    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | output              | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([512, 2, 512])        |
| 1487    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | input_0             | torch.float32 |         | -4.3171563        | 3.9669867        | 0.0002934      | 0.0508324             | torch.Size([512, 2, 512])        |
| 1487    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | output              | torch.float32 |         | -4.3171563        | 3.9669867        | 0.0002934      | 0.0508324             | torch.Size([512, 2, 512])        |
| 1488    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.q_proj                        | input               | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([512, 2, 512])        |
| 1488    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.q_proj                        | weight              | torch.float32 |         | -0.3136347        | 0.3103172        | -0.0000785     | 0.0029793             | torch.Size([512, 512])           |
| 1488    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.q_proj                        | bias                | torch.float32 |         | -0.0943940        | 0.0701011        | -0.0003392     | 0.0006187             | torch.Size([512])                |
| 1488    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.q_proj                        | output              | torch.float32 |         | -11.4772882       | 11.9125443       | 0.0020090      | 7.6422443             | torch.Size([512, 2, 512])        |
| 1489    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.k_proj                        | input               | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([512, 2, 512])        |
| 1489    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.k_proj                        | weight              | torch.float32 |         | -0.3332908        | 0.3325517        | -0.0000534     | 0.0031501             | torch.Size([512, 512])           |
| 1489    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.k_proj                        | bias                | torch.float32 |         | -0.1813514        | 0.2414232        | -0.0016250     | 0.0011009             | torch.Size([512])                |
| 1489    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.k_proj                        | output              | torch.float32 |         | -11.2372608       | 11.3232651       | 0.0079457      | 6.7909427             | torch.Size([512, 2, 512])        |
| 1490    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.v_proj                        | input               | torch.float32 |         | -4.3171563        | 3.9669867        | 0.0002934      | 0.0508324             | torch.Size([512, 2, 512])        |
| 1490    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.v_proj                        | weight              | torch.float32 |         | -0.3830613        | 0.3038961        | 0.0000100      | 0.0012182             | torch.Size([512, 512])           |
| 1490    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.v_proj                        | bias                | torch.float32 |         | -0.2282076        | 0.3300797        | 0.0050480      | 0.0049596             | torch.Size([512])                |
| 1490    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.v_proj                        | output              | torch.float32 |         | -2.7193267        | 3.0749669        | -0.0004318     | 0.0736060             | torch.Size([512, 2, 512])        |
| 1491    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | input_0             | torch.float32 |         | -11.4772882       | 11.9125443       | 0.0020090      | 7.6422443             | torch.Size([512, 2, 512])        |
| 1491    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | output              | torch.float32 |         | -11.4772882       | 11.9125443       | 0.0020090      | 7.6422443             | torch.Size([512, 16, 64])        |
| 1492    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | input_0             | torch.float32 |         | -11.4772882       | 11.9125443       | 0.0020090      | 7.6422443             | torch.Size([512, 16, 64])        |
| 1492    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | output              | torch.float32 |         | -11.4772882       | 11.9125443       | 0.0020090      | 7.6422443             | torch.Size([16, 512, 64])        |
| 1493    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | input_0             | torch.float32 |         | -11.2372608       | 11.3232651       | 0.0079457      | 6.7909427             | torch.Size([512, 2, 512])        |
| 1493    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | output              | torch.float32 |         | -11.2372608       | 11.3232651       | 0.0079457      | 6.7909427             | torch.Size([512, 16, 64])        |
| 1494    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | input_0             | torch.float32 |         | -11.2372608       | 11.3232651       | 0.0079457      | 6.7909427             | torch.Size([512, 16, 64])        |
| 1494    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | output              | torch.float32 |         | -11.2372608       | 11.3232651       | 0.0079457      | 6.7909427             | torch.Size([16, 512, 64])        |
| 1495    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | input_0             | torch.float32 |         | -2.7193267        | 3.0749669        | -0.0004318     | 0.0736060             | torch.Size([512, 2, 512])        |
| 1495    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | output              | torch.float32 |         | -2.7193267        | 3.0749669        | -0.0004318     | 0.0736060             | torch.Size([512, 16, 64])        |
| 1496    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | input_0             | torch.float32 |         | -2.7193267        | 3.0749669        | -0.0004318     | 0.0736060             | torch.Size([512, 16, 64])        |
| 1496    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | output              | torch.float32 |         | -2.7193267        | 3.0749669        | -0.0004318     | 0.0736060             | torch.Size([16, 512, 64])        |
| 1497    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.15.attn.q_scale_mul                   | input_0             | torch.float32 |         | -11.4772882       | 11.9125443       | 0.0020090      | 7.6422443             | torch.Size([16, 512, 64])        |
| 1497    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.15.attn.q_scale_mul                   | output              | torch.float32 |         | -1.4346610        | 1.4890680        | 0.0002511      | 0.1194101             | torch.Size([16, 512, 64])        |
| 1498    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | input_0             | torch.float32 |         | -11.2372608       | 11.3232651       | 0.0079457      | 6.7909427             | torch.Size([16, 512, 64])        |
| 1498    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | output              | torch.float32 |         | -11.2372608       | 11.3232651       | 0.0079457      | 6.7909427             | torch.Size([16, 64, 512])        |
| 1499    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.15.attn.matmul                        | input_0             | torch.float32 |         | -1.4346610        | 1.4890680        | 0.0002511      | 0.1194101             | torch.Size([16, 512, 64])        |
| 1499    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.15.attn.matmul                        | input_1             | torch.float32 |         | -11.2372608       | 11.3232651       | 0.0079457      | 6.7909427             | torch.Size([16, 64, 512])        |
| 1499    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.15.attn.matmul                        | output              | torch.float32 |         | -122.6802063      | 166.0287933      | -2.7214150     | 610.8202515           | torch.Size([16, 512, 512])       |
| 1500    | torch.Tensor.max                                                                  | head.layers.15.attn.softmax                       | input               | torch.float32 |         | -122.6802063      | 166.0287933      | -2.7214150     | 610.8202515           | torch.Size([16, 512, 512])       |
| 1500    | torch.Tensor.max                                                                  | head.layers.15.attn.softmax                       | output_0            | torch.float32 |         | 5.5223331         | 166.0287933      | 46.9127693     | 768.3522949           | torch.Size([16, 512, 1])         |
| 1500    | torch.Tensor.max                                                                  | head.layers.15.attn.softmax                       | output_1            | torch.int64   |         | 0.0000000         | 510.0000000      | 325.6202393    | 12367.8232422         | torch.Size([16, 512, 1])         |
| 1501    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.15.attn.softmax.sub                   | input_0             | torch.float32 |         | -122.6802063      | 166.0287933      | -2.7214150     | 610.8202515           | torch.Size([16, 512, 512])       |
| 1501    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.15.attn.softmax.sub                   | input_1             | torch.float32 |         | 5.5223331         | 166.0287933      | 46.9127693     | 768.3522949           | torch.Size([16, 512, 1])         |
| 1501    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.15.attn.softmax.sub                   | output              | torch.float32 |         | -251.8900757      | 0.0000000        | -49.6341820    | 1273.9515381          | torch.Size([16, 512, 512])       |
| 1502    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.15.attn.softmax.exp                   | input               | torch.float32 |         | -251.8900757      | 0.0000000        | -49.6341820    | 1273.9515381          | torch.Size([16, 512, 512])       |
| 1502    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.15.attn.softmax.exp                   | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0084623      | 0.0056610             | torch.Size([16, 512, 512])       |
| 1503    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.15.attn.softmax.sum                   | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0084623      | 0.0056610             | torch.Size([16, 512, 512])       |
| 1503    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.15.attn.softmax.sum                   | output              | torch.float32 |         | 1.0000000         | 130.6948853      | 4.3327188      | 141.2160797           | torch.Size([16, 512, 1])         |
| 1504    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.15.attn.softmax.reciprocal            | input               | torch.float32 |         | 1.0000000         | 130.6948853      | 4.3327188      | 141.2160797           | torch.Size([16, 512, 1])         |
| 1504    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.15.attn.softmax.reciprocal            | output              | torch.float32 |         | 0.0076514         | 1.0000000        | 0.5345378      | 0.0856058             | torch.Size([16, 512, 1])         |
| 1505    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.15.attn.softmax.mul                   | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0084623      | 0.0056610             | torch.Size([16, 512, 512])       |
| 1505    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.15.attn.softmax.mul                   | input_1             | torch.float32 |         | 0.0076514         | 1.0000000        | 0.5345378      | 0.0856058             | torch.Size([16, 512, 1])         |
| 1505    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.15.attn.softmax.mul                   | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0008247             | torch.Size([16, 512, 512])       |
| 1506    | torch.nn.modules.dropout.Dropout                                                  | head.layers.15.attn.attention_drop                | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0008247             | torch.Size([16, 512, 512])       |
| 1506    | torch.nn.modules.dropout.Dropout                                                  | head.layers.15.attn.attention_drop                | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0008247             | torch.Size([16, 512, 512])       |
| 1507    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.15.attn.attn_matmul                   | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0008247             | torch.Size([16, 512, 512])       |
| 1507    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.15.attn.attn_matmul                   | input_1             | torch.float32 |         | -2.7193267        | 3.0749669        | -0.0004318     | 0.0736060             | torch.Size([16, 512, 64])        |
| 1507    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.15.attn.attn_matmul                   | output              | torch.float32 |         | -1.9917831        | 2.5110896        | -0.0016625     | 0.0561733             | torch.Size([16, 512, 64])        |
| 1508    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | input_0             | torch.float32 |         | -1.9917831        | 2.5110896        | -0.0016625     | 0.0561733             | torch.Size([16, 512, 64])        |
| 1508    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | output              | torch.float32 |         | -1.9917831        | 2.5110896        | -0.0016625     | 0.0561733             | torch.Size([512, 16, 64])        |
| 1509    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | input_0             | torch.float32 |         | -1.9917831        | 2.5110896        | -0.0016625     | 0.0561733             | torch.Size([512, 16, 64])        |
| 1509    | torch.Tensor.reshape                                                              | head.layers.15.attn                               | output              | torch.float32 |         | -1.9917831        | 2.5110896        | -0.0016625     | 0.0561733             | torch.Size([512, 2, 512])        |
| 1510    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.out_proj                      | input               | torch.float32 |         | -1.9917831        | 2.5110896        | -0.0016625     | 0.0561733             | torch.Size([512, 2, 512])        |
| 1510    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.out_proj                      | weight              | torch.float32 |         | -0.2006125        | 0.2132747        | 0.0000258      | 0.0022547             | torch.Size([512, 512])           |
| 1510    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.out_proj                      | bias                | torch.float32 |         | -0.4402698        | 0.3843731        | -0.0079231     | 0.0224835             | torch.Size([512])                |
| 1510    | torch.nn.modules.linear.Linear                                                    | head.layers.15.attn.out_proj                      | output              | torch.float32 |         | -2.2169101        | 1.8838351        | -0.0006991     | 0.1987105             | torch.Size([512, 2, 512])        |
| 1511    | torch.Tensor.view                                                                 | head.layers.15.attn                               | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0008247             | torch.Size([16, 512, 512])       |
| 1511    | torch.Tensor.view                                                                 | head.layers.15.attn                               | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0008247             | torch.Size([2, 8, 512, 512])     |
| 1512    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.15.attn.attn_weights_mean             | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0008247             | torch.Size([2, 8, 512, 512])     |
| 1512    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.15.attn.attn_weights_mean             | output              | torch.float32 |         | 0.0000000         | 0.2260913        | 0.0019531      | 0.0001099             | torch.Size([2, 512, 512])        |
| 1513    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | input_0             | torch.float32 |         | -2.2169101        | 1.8838351        | -0.0006991     | 0.1987105             | torch.Size([512, 2, 512])        |
| 1513    | torch.Tensor.transpose                                                            | head.layers.15.attn                               | output              | torch.float32 |         | -2.2169101        | 1.8838351        | -0.0006991     | 0.1987105             | torch.Size([2, 512, 512])        |
| 1514    | torch.nn.modules.dropout.Dropout                                                  | head.layers.15.dropout                            | input               | torch.float32 |         | -2.2169101        | 1.8838351        | -0.0006991     | 0.1987105             | torch.Size([2, 512, 512])        |
| 1514    | torch.nn.modules.dropout.Dropout                                                  | head.layers.15.dropout                            | output              | torch.float32 |         | -2.2169101        | 1.8838351        | -0.0006991     | 0.1987105             | torch.Size([2, 512, 512])        |
| 1515    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.15.add                                | input_0             | torch.float32 |         | -7.7025394        | 8.3414631        | 0.0207173      | 0.9132881             | torch.Size([2, 512, 512])        |
| 1515    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.15.add                                | input_1             | torch.float32 |         | -2.2169101        | 1.8838351        | -0.0006991     | 0.1987105             | torch.Size([2, 512, 512])        |
| 1515    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.15.add                                | output              | torch.float32 |         | -7.6856298        | 7.9399390        | 0.0200182      | 1.1039100             | torch.Size([2, 512, 512])        |
| 1516    | torch.nn.modules.linear.Linear                                                    | head.fc_after(5)                                  | input               | torch.float32 |         | -7.6856298        | 7.9399390        | 0.0200182      | 1.1039100             | torch.Size([2, 512, 512])        |
| 1516    | torch.nn.modules.linear.Linear                                                    | head.fc_after(5)                                  | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 1516    | torch.nn.modules.linear.Linear                                                    | head.fc_after(5)                                  | output              | torch.float32 |         | -51.0524788       | 37.2847748       | 0.0328875      | 15.7673254            | torch.Size([2, 512, 256])        |
| 1517    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.16.input_mean.mean                    | input_0             | torch.float32 |         | -51.0524788       | 37.2847748       | 0.0328875      | 15.7673254            | torch.Size([2, 512, 256])        |
| 1517    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.16.input_mean.mean                    | output              | torch.float32 |         | -0.0770542        | 0.1364342        | 0.0328875      | 0.0016914             | torch.Size([2, 512, 1])          |
| 1518    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.16.sub                                | input_0             | torch.float32 |         | -51.0524788       | 37.2847748       | 0.0328875      | 15.7673254            | torch.Size([2, 512, 256])        |
| 1518    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.16.sub                                | input_1             | torch.float32 |         | -0.0770542        | 0.1364342        | 0.0328875      | 0.0016914             | torch.Size([2, 512, 1])          |
| 1518    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.16.sub                                | output              | torch.float32 |         | -51.1251373       | 37.2121162       | 0.0000000      | 15.7656355            | torch.Size([2, 512, 256])        |
| 1519    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.16.mul                                | input_0             | torch.float32 |         | -51.1251373       | 37.2121162       | 0.0000000      | 15.7656355            | torch.Size([2, 512, 256])        |
| 1519    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.16.mul                                | input_1             | torch.float32 |         | -51.1251373       | 37.2121162       | 0.0000000      | 15.7656355            | torch.Size([2, 512, 256])        |
| 1519    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.16.mul                                | output              | torch.float32 |         | 0.0000000         | 2613.7797852     | 15.7655745     | 9951.7500000          | torch.Size([2, 512, 256])        |
| 1520    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.16.var_mean.mean                      | input_0             | torch.float32 |         | 0.0000000         | 2613.7797852     | 15.7655745     | 9951.7500000          | torch.Size([2, 512, 256])        |
| 1520    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.16.var_mean.mean                      | output              | torch.float32 |         | 5.6042175         | 39.6573715       | 15.7655754     | 67.0621262            | torch.Size([2, 512, 1])          |
| 1521    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.16.rsqrt                              | input               | torch.float32 |         | 5.6042175         | 39.6573715       | 15.7655754     | 67.0621262            | torch.Size([2, 512, 1])          |
| 1521    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.16.rsqrt                              | output              | torch.float32 |         | 0.1587954         | 0.4224177        | 0.2828934      | 0.0068089             | torch.Size([2, 512, 1])          |
| 1522    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.16.out_mul                            | input_0             | torch.float32 |         | -51.1251373       | 37.2121162       | 0.0000000      | 15.7656355            | torch.Size([2, 512, 256])        |
| 1522    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.16.out_mul                            | input_1             | torch.float32 |         | 0.1587954         | 0.4224177        | 0.2828934      | 0.0068089             | torch.Size([2, 512, 1])          |
| 1522    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.16.out_mul                            | output              | torch.float32 |         | -8.5119762        | 6.2673545        | 0.0000000      | 1.0000030             | torch.Size([2, 512, 256])        |
| 1523    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.16.weight_quant                       | input               | torch.float32 |         | 0.7322687         | 0.9884943        | 0.8490973      | 0.0018859             | torch.Size([256])                |
| 1523    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.16.weight_quant                       | output              | torch.float32 |         | 0.7322687         | 0.9884943        | 0.8490973      | 0.0018859             | torch.Size([256])                |
| 1524    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.16.weight_mul                         | input_0             | torch.float32 |         | -8.5119762        | 6.2673545        | 0.0000000      | 1.0000030             | torch.Size([2, 512, 256])        |
| 1524    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.16.weight_mul                         | input_1             | torch.float32 |         | 0.7322687         | 0.9884943        | 0.8490973      | 0.0018859             | torch.Size([256])                |
| 1524    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.16.weight_mul                         | output              | torch.float32 |         | -7.5840654        | 4.8215246        | -0.0026162     | 0.7263891             | torch.Size([2, 512, 256])        |
| 1525    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.16.bias_quant                         | input               | torch.float32 |         | -0.1939087        | 0.1560507        | -0.0045885     | 0.0017009             | torch.Size([256])                |
| 1525    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.16.bias_quant                         | output              | torch.float32 |         | -0.1939087        | 0.1560507        | -0.0045885     | 0.0017009             | torch.Size([256])                |
| 1526    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.16.bias_add                           | input_0             | torch.float32 |         | -7.5840654        | 4.8215246        | -0.0026162     | 0.7263891             | torch.Size([2, 512, 256])        |
| 1526    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.16.bias_add                           | input_1             | torch.float32 |         | -0.1939087        | 0.1560507        | -0.0045885     | 0.0017009             | torch.Size([256])                |
| 1526    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.16.bias_add                           | output              | torch.float32 |         | -7.5335484        | 4.7197452        | -0.0072047     | 0.6925276             | torch.Size([2, 512, 256])        |
| 1527    | torch.nn.modules.linear.Linear                                                    | head.layers.17.kps_generator.offset               | input               | torch.float32 |         | -7.5335484        | 4.7197452        | -0.0072047     | 0.6925276             | torch.Size([2, 512, 256])        |
| 1527    | torch.nn.modules.linear.Linear                                                    | head.layers.17.kps_generator.offset               | weight              | torch.float32 |         | -0.1968990        | 0.1851189        | 0.0002006      | 0.0033782             | torch.Size([24, 256])            |
| 1527    | torch.nn.modules.linear.Linear                                                    | head.layers.17.kps_generator.offset               | bias                | torch.float32 |         | -0.0576364        | 0.0380543        | -0.0028053     | 0.0006696             | torch.Size([24])                 |
| 1527    | torch.nn.modules.linear.Linear                                                    | head.layers.17.kps_generator.offset               | output              | torch.float32 |         | -4.8859859        | 5.1529016        | -0.1385739     | 1.3666580             | torch.Size([2, 512, 24])         |
| 1528    | torch.Tensor.view                                                                 | head.layers.17.kps_generator                      | input_0             | torch.float32 |         | -4.8859859        | 5.1529016        | -0.1385739     | 1.3666580             | torch.Size([2, 512, 24])         |
| 1528    | torch.Tensor.view                                                                 | head.layers.17.kps_generator                      | output              | torch.float32 |         | -4.8859859        | 5.1529016        | -0.1385739     | 1.3666580             | torch.Size([2, 512, 8, 3])       |
| 1529    | torch.Tensor.__getitem__                                                          | head.layers.17.kps_generator                      | input_0             | torch.float32 |         | -53.5874481       | 53.7214432       | 0.2118326      | 78.6301498            | torch.Size([2, 512, 11])         |
| 1529    | torch.Tensor.__getitem__                                                          | head.layers.17.kps_generator                      | output              | torch.float32 |         | -53.5874481       | 53.7214432       | 0.7427100      | 286.9937744           | torch.Size([2, 512, 1, 3])       |
| 1530    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.kps_generator.keypoints_add        | input_0             | torch.float32 |         | -4.8859859        | 5.1529016        | -0.1385739     | 1.3666580             | torch.Size([2, 512, 8, 3])       |
| 1530    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.kps_generator.keypoints_add        | input_1             | torch.float32 |         | -53.5874481       | 53.7214432       | 0.7427100      | 286.9937744           | torch.Size([2, 512, 1, 3])       |
| 1530    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.kps_generator.keypoints_add        | output              | torch.float32 |         | -55.7241516       | 57.0714684       | 0.6041360      | 289.8799744           | torch.Size([2, 512, 8, 3])       |
| 1531    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.weight_add                         | input_0             | torch.float32 |         | -7.5335484        | 4.7197452        | -0.0072047     | 0.6925276             | torch.Size([2, 512, 256])        |
| 1531    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.weight_add                         | input_1             | torch.float32 |         | -1.7289910        | 7.5853052        | 0.0528211      | 0.8510517             | torch.Size([2, 512, 256])        |
| 1531    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.weight_add                         | output              | torch.float32 |         | -7.9138455        | 7.6449976        | 0.0456164      | 1.4399004             | torch.Size([2, 512, 256])        |
| 1532    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 1532    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 1533    | torch.Tensor.reshape                                                              | head.layers.17                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 1533    | torch.Tensor.reshape                                                              | head.layers.17                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 1534    | torch.nn.modules.linear.Linear                                                    | head.layers.17.camera_encoder.0                   | input               | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 1534    | torch.nn.modules.linear.Linear                                                    | head.layers.17.camera_encoder.0                   | weight              | torch.float32 |         | -0.4340022        | 0.4555438        | -0.0011310     | 0.0120533             | torch.Size([256, 12])            |
| 1534    | torch.nn.modules.linear.Linear                                                    | head.layers.17.camera_encoder.0                   | bias                | torch.float32 |         | -0.3300059        | 0.3633537        | 0.0122508      | 0.0318757             | torch.Size([256])                |
| 1534    | torch.nn.modules.linear.Linear                                                    | head.layers.17.camera_encoder.0                   | output              | torch.float32 |         | -1.1894271        | 1.5382119        | 0.0199212      | 0.2625082             | torch.Size([2, 6, 256])          |
| 1535    | torch.nn.modules.activation.ReLU                                                  | head.layers.17.camera_encoder.1                   | input               | torch.float32 |         | 0.0000000         | 1.5382119        | 0.2255143      | 0.1044576             | torch.Size([2, 6, 256])          |
| 1535    | torch.nn.modules.activation.ReLU                                                  | head.layers.17.camera_encoder.1                   | output              | torch.float32 |         | 0.0000000         | 1.5382119        | 0.2255143      | 0.1044576             | torch.Size([2, 6, 256])          |
| 1536    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.2.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 1.5382119        | 0.2255143      | 0.1044576             | torch.Size([2, 6, 256])          |
| 1536    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.2.input_mean.mean   | output              | torch.float32 |         | 0.1725826         | 0.2587980        | 0.2255143      | 0.0008033             | torch.Size([2, 6, 1])            |
| 1537    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.2.sub               | input_0             | torch.float32 |         | 0.0000000         | 1.5382119        | 0.2255143      | 0.1044576             | torch.Size([2, 6, 256])          |
| 1537    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.2.sub               | input_1             | torch.float32 |         | 0.1725826         | 0.2587980        | 0.2255143      | 0.0008033             | torch.Size([2, 6, 1])            |
| 1537    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.2.sub               | output              | torch.float32 |         | -0.2587980        | 1.2794139        | -0.0000000     | 0.1037210             | torch.Size([2, 6, 256])          |
| 1538    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.mul               | input_0             | torch.float32 |         | -0.2587980        | 1.2794139        | -0.0000000     | 0.1037210             | torch.Size([2, 6, 256])          |
| 1538    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.mul               | input_1             | torch.float32 |         | -0.2587980        | 1.2794139        | -0.0000000     | 0.1037210             | torch.Size([2, 6, 256])          |
| 1538    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.mul               | output              | torch.float32 |         | 0.0000000         | 1.6369001        | 0.1036873      | 0.0379219             | torch.Size([2, 6, 256])          |
| 1539    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.2.var_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 1.6369001        | 0.1036873      | 0.0379219             | torch.Size([2, 6, 256])          |
| 1539    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.2.var_mean.mean     | output              | torch.float32 |         | 0.0562574         | 0.1334011        | 0.1036873      | 0.0007284             | torch.Size([2, 6, 1])            |
| 1540    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.17.camera_encoder.2.rsqrt             | input               | torch.float32 |         | 0.0562574         | 0.1334011        | 0.1036873      | 0.0007284             | torch.Size([2, 6, 1])            |
| 1540    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.17.camera_encoder.2.rsqrt             | output              | torch.float32 |         | 2.7378149         | 4.2157173        | 3.1973987      | 0.2603011             | torch.Size([2, 6, 1])            |
| 1541    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.out_mul           | input_0             | torch.float32 |         | -0.2587980        | 1.2794139        | -0.0000000     | 0.1037210             | torch.Size([2, 6, 256])          |
| 1541    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.out_mul           | input_1             | torch.float32 |         | 2.7378149         | 4.2157173        | 3.1973987      | 0.2603011             | torch.Size([2, 6, 1])            |
| 1541    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.out_mul           | output              | torch.float32 |         | -0.7310261        | 4.0952663        | -0.0000000     | 1.0002209             | torch.Size([2, 6, 256])          |
| 1542    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.camera_encoder.2.weight_quant      | input               | torch.float32 |         | 0.8256041         | 1.2137457        | 0.9921471      | 0.0037993             | torch.Size([256])                |
| 1542    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.camera_encoder.2.weight_quant      | output              | torch.float32 |         | 0.8256041         | 1.2137457        | 0.9921471      | 0.0037993             | torch.Size([256])                |
| 1543    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.weight_mul        | input_0             | torch.float32 |         | -0.7310261        | 4.0952663        | -0.0000000     | 1.0002209             | torch.Size([2, 6, 256])          |
| 1543    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.weight_mul        | input_1             | torch.float32 |         | 0.8256041         | 1.2137457        | 0.9921471      | 0.0037993             | torch.Size([256])                |
| 1543    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.2.weight_mul        | output              | torch.float32 |         | -0.8414440        | 4.2961860        | -0.0043525     | 1.0061815             | torch.Size([2, 6, 256])          |
| 1544    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.camera_encoder.2.bias_quant        | input               | torch.float32 |         | -0.1173504        | 0.1054403        | -0.0015248     | 0.0022785             | torch.Size([256])                |
| 1544    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.camera_encoder.2.bias_quant        | output              | torch.float32 |         | -0.1173504        | 0.1054403        | -0.0015248     | 0.0022785             | torch.Size([256])                |
| 1545    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.2.bias_add          | input_0             | torch.float32 |         | -0.8414440        | 4.2961860        | -0.0043525     | 1.0061815             | torch.Size([2, 6, 256])          |
| 1545    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.2.bias_add          | input_1             | torch.float32 |         | -0.1173504        | 0.1054403        | -0.0015248     | 0.0022785             | torch.Size([256])                |
| 1545    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.2.bias_add          | output              | torch.float32 |         | -0.9426008        | 4.3878088        | -0.0058773     | 1.0512410             | torch.Size([2, 6, 256])          |
| 1546    | torch.nn.modules.linear.Linear                                                    | head.layers.17.camera_encoder.3                   | input               | torch.float32 |         | -0.9426008        | 4.3878088        | -0.0058773     | 1.0512410             | torch.Size([2, 6, 256])          |
| 1546    | torch.nn.modules.linear.Linear                                                    | head.layers.17.camera_encoder.3                   | weight              | torch.float32 |         | -0.4107684        | 0.3999822        | 0.0008692      | 0.0045543             | torch.Size([256, 256])           |
| 1546    | torch.nn.modules.linear.Linear                                                    | head.layers.17.camera_encoder.3                   | bias                | torch.float32 |         | -0.0767870        | 0.2690172        | -0.0036183     | 0.0019012             | torch.Size([256])                |
| 1546    | torch.nn.modules.linear.Linear                                                    | head.layers.17.camera_encoder.3                   | output              | torch.float32 |         | -5.6160278        | 52.2024651       | -0.6557021     | 23.1398849            | torch.Size([2, 6, 256])          |
| 1547    | torch.nn.modules.activation.ReLU                                                  | head.layers.17.camera_encoder.4                   | input               | torch.float32 |         | 0.0000000         | 52.2024651       | 0.8415097      | 19.8525543            | torch.Size([2, 6, 256])          |
| 1547    | torch.nn.modules.activation.ReLU                                                  | head.layers.17.camera_encoder.4                   | output              | torch.float32 |         | 0.0000000         | 52.2024651       | 0.8415097      | 19.8525543            | torch.Size([2, 6, 256])          |
| 1548    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.5.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 52.2024651       | 0.8415097      | 19.8525543            | torch.Size([2, 6, 256])          |
| 1548    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.5.input_mean.mean   | output              | torch.float32 |         | 0.8237170         | 0.8658440        | 0.8415096      | 0.0002696             | torch.Size([2, 6, 1])            |
| 1549    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.5.sub               | input_0             | torch.float32 |         | 0.0000000         | 52.2024651       | 0.8415097      | 19.8525543            | torch.Size([2, 6, 256])          |
| 1549    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.5.sub               | input_1             | torch.float32 |         | 0.8237170         | 0.8658440        | 0.8415096      | 0.0002696             | torch.Size([2, 6, 1])            |
| 1549    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.17.camera_encoder.5.sub               | output              | torch.float32 |         | -0.8658440        | 51.3713951       | 0.0000000      | 19.8523083            | torch.Size([2, 6, 256])          |
| 1550    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.mul               | input_0             | torch.float32 |         | -0.8658440        | 51.3713951       | 0.0000000      | 19.8523083            | torch.Size([2, 6, 256])          |
| 1550    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.mul               | input_1             | torch.float32 |         | -0.8658440        | 51.3713951       | 0.0000000      | 19.8523083            | torch.Size([2, 6, 256])          |
| 1550    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.mul               | output              | torch.float32 |         | 0.0000006         | 2639.0202637     | 19.8458443     | 31923.4160156         | torch.Size([2, 6, 256])          |
| 1551    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.5.var_mean.mean     | input_0             | torch.float32 |         | 0.0000006         | 2639.0202637     | 19.8458443     | 31923.4160156         | torch.Size([2, 6, 256])          |
| 1551    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.17.camera_encoder.5.var_mean.mean     | output              | torch.float32 |         | 18.4981041        | 20.9553642       | 19.8458443     | 0.6084757             | torch.Size([2, 6, 1])            |
| 1552    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.17.camera_encoder.5.rsqrt             | input               | torch.float32 |         | 18.4981041        | 20.9553642       | 19.8458443     | 0.6084757             | torch.Size([2, 6, 1])            |
| 1552    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.17.camera_encoder.5.rsqrt             | output              | torch.float32 |         | 0.2184501         | 0.2325071        | 0.2245946      | 0.0000200             | torch.Size([2, 6, 1])            |
| 1553    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.out_mul           | input_0             | torch.float32 |         | -0.8658440        | 51.3713951       | 0.0000000      | 19.8523083            | torch.Size([2, 6, 256])          |
| 1553    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.out_mul           | input_1             | torch.float32 |         | 0.2184501         | 0.2325071        | 0.2245946      | 0.0000200             | torch.Size([2, 6, 1])            |
| 1553    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.out_mul           | output              | torch.float32 |         | -0.1945554        | 11.6506519       | 0.0000000      | 1.0003252             | torch.Size([2, 6, 256])          |
| 1554    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.camera_encoder.5.weight_quant      | input               | torch.float32 |         | 0.3230061         | 1.5668622        | 0.8971218      | 0.0266640             | torch.Size([256])                |
| 1554    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.camera_encoder.5.weight_quant      | output              | torch.float32 |         | 0.3230061         | 1.5668622        | 0.8971218      | 0.0266640             | torch.Size([256])                |
| 1555    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.weight_mul        | input_0             | torch.float32 |         | -0.1945554        | 11.6506519       | 0.0000000      | 1.0003252             | torch.Size([2, 6, 256])          |
| 1555    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.weight_mul        | input_1             | torch.float32 |         | 0.3230061         | 1.5668622        | 0.8971218      | 0.0266640             | torch.Size([256])                |
| 1555    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.camera_encoder.5.weight_mul        | output              | torch.float32 |         | -0.3048415        | 9.7202406        | -0.0173343     | 0.6443523             | torch.Size([2, 6, 256])          |
| 1556    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.camera_encoder.5.bias_quant        | input               | torch.float32 |         | -0.5803625        | 0.6603993        | 0.0418145      | 0.0299207             | torch.Size([256])                |
| 1556    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.camera_encoder.5.bias_quant        | output              | torch.float32 |         | -0.5803625        | 0.6603993        | 0.0418145      | 0.0299207             | torch.Size([256])                |
| 1557    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.5.bias_add          | input_0             | torch.float32 |         | -0.3048415        | 9.7202406        | -0.0173343     | 0.6443523             | torch.Size([2, 6, 256])          |
| 1557    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.5.bias_add          | input_1             | torch.float32 |         | -0.5803625        | 0.6603993        | 0.0418145      | 0.0299207             | torch.Size([256])                |
| 1557    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.camera_encoder.5.bias_add          | output              | torch.float32 |         | -0.8852041        | 9.5420856        | 0.0244802      | 0.6281091             | torch.Size([2, 6, 256])          |
| 1558    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | input_0             | torch.float32 |         | -7.9138455        | 7.6449976        | 0.0456164      | 1.4399004             | torch.Size([2, 512, 256])        |
| 1558    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | output              | torch.float32 |         | -7.9138455        | 7.6449976        | 0.0456164      | 1.4399004             | torch.Size([2, 512, 1, 256])     |
| 1559    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | input_0             | torch.float32 |         | -0.8852041        | 9.5420856        | 0.0244802      | 0.6281091             | torch.Size([2, 6, 256])          |
| 1559    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | output              | torch.float32 |         | -0.8852041        | 9.5420856        | 0.0244802      | 0.6281091             | torch.Size([2, 1, 6, 256])       |
| 1560    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.cam_add                            | input_0             | torch.float32 |         | -7.9138455        | 7.6449976        | 0.0456164      | 1.4399004             | torch.Size([2, 512, 1, 256])     |
| 1560    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.cam_add                            | input_1             | torch.float32 |         | -0.8852041        | 9.5420856        | 0.0244802      | 0.6281091             | torch.Size([2, 1, 6, 256])       |
| 1560    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.17.cam_add                            | output              | torch.float32 |         | -7.4826179        | 9.1853046        | 0.0700966      | 1.5766993             | torch.Size([2, 512, 6, 256])     |
| 1561    | torch.nn.modules.linear.Linear                                                    | head.layers.17.weights_fc                         | input               | torch.float32 |         | -7.4826179        | 9.1853046        | 0.0700966      | 1.5766993             | torch.Size([2, 512, 6, 256])     |
| 1561    | torch.nn.modules.linear.Linear                                                    | head.layers.17.weights_fc                         | weight              | torch.float32 |         | -0.3149840        | 0.2312223        | -0.0015179     | 0.0027572             | torch.Size([64, 256])            |
| 1561    | torch.nn.modules.linear.Linear                                                    | head.layers.17.weights_fc                         | bias                | torch.float32 |         | -0.0682593        | 0.0964835        | 0.0102252      | 0.0008483             | torch.Size([64])                 |
| 1561    | torch.nn.modules.linear.Linear                                                    | head.layers.17.weights_fc                         | output              | torch.float32 |         | -10.5827312       | 6.5753183        | 0.5541902      | 5.0662146             | torch.Size([2, 512, 6, 64])      |
| 1562    | torch.Tensor.reshape                                                              | head.layers.17                                    | input_0             | torch.float32 |         | -10.5827312       | 6.5753183        | 0.5541902      | 5.0662146             | torch.Size([2, 512, 6, 64])      |
| 1562    | torch.Tensor.reshape                                                              | head.layers.17                                    | output              | torch.float32 |         | -10.5827312       | 6.5753183        | 0.5541902      | 5.0662146             | torch.Size([2, 512, 48, 8])      |
| 1563    | torch.Tensor.max                                                                  | head.layers.17.weight_softmax                     | input               | torch.float32 |         | -10.5827312       | 6.5753183        | 0.5541902      | 5.0662146             | torch.Size([2, 512, 48, 8])      |
| 1563    | torch.Tensor.max                                                                  | head.layers.17.weight_softmax                     | output_0            | torch.float32 |         | 2.0941877         | 6.5753183        | 3.7255151      | 0.5441853             | torch.Size([2, 512, 1, 8])       |
| 1563    | torch.Tensor.max                                                                  | head.layers.17.weight_softmax                     | output_1            | torch.int64   |         | 0.0000000         | 47.0000000       | 32.4482422     | 144.2409973           | torch.Size([2, 512, 1, 8])       |
| 1564    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.17.weight_softmax.sub                 | input_0             | torch.float32 |         | -10.5827312       | 6.5753183        | 0.5541902      | 5.0662146             | torch.Size([2, 512, 48, 8])      |
| 1564    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.17.weight_softmax.sub                 | input_1             | torch.float32 |         | 2.0941877         | 6.5753183        | 3.7255151      | 0.5441853             | torch.Size([2, 512, 1, 8])       |
| 1564    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.17.weight_softmax.sub                 | output              | torch.float32 |         | -13.9372749       | 0.0000000        | -3.1713252     | 5.2025037             | torch.Size([2, 512, 48, 8])      |
| 1565    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.17.weight_softmax.exp                 | input               | torch.float32 |         | -13.9372749       | 0.0000000        | -3.1713252     | 5.2025037             | torch.Size([2, 512, 48, 8])      |
| 1565    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.17.weight_softmax.exp                 | output              | torch.float32 |         | 0.0000009         | 1.0000000        | 0.2166278      | 0.0961181             | torch.Size([2, 512, 48, 8])      |
| 1566    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.17.weight_softmax.sum                 | input               | torch.float32 |         | 0.0000009         | 1.0000000        | 0.2166278      | 0.0961181             | torch.Size([2, 512, 48, 8])      |
| 1566    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.17.weight_softmax.sum                 | output              | torch.float32 |         | 4.3329668         | 28.0687428       | 10.3981333     | 8.6844234             | torch.Size([2, 512, 1, 8])       |
| 1567    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.17.weight_softmax.reciprocal          | input               | torch.float32 |         | 4.3329668         | 28.0687428       | 10.3981333     | 8.6844234             | torch.Size([2, 512, 1, 8])       |
| 1567    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.17.weight_softmax.reciprocal          | output              | torch.float32 |         | 0.0356268         | 0.2307888        | 0.1045342      | 0.0010923             | torch.Size([2, 512, 1, 8])       |
| 1568    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.weight_softmax.mul                 | input_0             | torch.float32 |         | 0.0000009         | 1.0000000        | 0.2166278      | 0.0961181             | torch.Size([2, 512, 48, 8])      |
| 1568    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.weight_softmax.mul                 | input_1             | torch.float32 |         | 0.0356268         | 0.2307888        | 0.1045342      | 0.0010923             | torch.Size([2, 512, 1, 8])       |
| 1568    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.weight_softmax.mul                 | output              | torch.float32 |         | 0.0000001         | 0.2307888        | 0.0208333      | 0.0009864             | torch.Size([2, 512, 48, 8])      |
| 1569    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | input_0             | torch.float32 |         | -55.7241516       | 57.0714684       | 0.6041360      | 289.8799744           | torch.Size([2, 512, 8, 3])       |
| 1569    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | output              | torch.float32 |         | -52.4897270       | 52.3072014       | 1.0314738      | 312.6091919           | torch.Size([2, 512, 8, 1])       |
| 1570    | torch.ones_like                                                                   | head.layers.17                                    | input               | torch.float32 |         | -52.4897270       | 52.3072014       | 1.0314738      | 312.6091919           | torch.Size([2, 512, 8, 1])       |
| 1570    | torch.ones_like                                                                   | head.layers.17                                    | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1571    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.point_quant_stub                   | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1571    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.17.point_quant_stub                   | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1572    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.17.point_cat                          | input_0             | torch.float32 |         | -55.7241516       | 57.0714684       | 0.6041360      | 289.8799744           | torch.Size([2, 512, 8, 3])       |
| 1572    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.17.point_cat                          | input_1             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1572    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.17.point_cat                          | output              | torch.float32 |         | -55.7241516       | 57.0714684       | 0.7031019      | 217.4371643           | torch.Size([2, 512, 8, 4])       |
| 1573    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 1573    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1574    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | input_0             | torch.float32 |         | -55.7241516       | 57.0714684       | 0.7031019      | 217.4371643           | torch.Size([2, 512, 8, 4])       |
| 1574    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | output              | torch.float32 |         | -55.7241516       | 57.0714684       | 0.7031019      | 217.4371643           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1575    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.point_matmul                       | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1575    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.point_matmul                       | input_1             | torch.float32 |         | -55.7241516       | 57.0714684       | 0.7031019      | 217.4371643           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1575    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.point_matmul                       | output              | torch.float32 |         | -84.0590134       | 85.1784515       | 0.3071145      | 98.7361679            | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1576    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.17.point_sum                          | input               | torch.float32 |         | -84.0590134       | 85.1784515       | 0.3071145      | 98.7361679            | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1576    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.17.point_sum                          | output              | torch.float32 |         | -87.5456085       | 90.5733871       | 1.2284580      | 387.8705139           | torch.Size([2, 6, 512, 8, 4])    |
| 1577    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | input_0             | torch.float32 |         | -87.5456085       | 90.5733871       | 1.2284580      | 387.8705139           | torch.Size([2, 6, 512, 8, 4])    |
| 1577    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | output              | torch.float32 |         | -57.9748688       | 56.5619926       | -0.5344648     | 428.8114929           | torch.Size([2, 6, 512, 8, 1])    |
| 1578    | torch.clamp                                                                       | head.layers.17                                    | input               | torch.float32 |         | -57.9748688       | 56.5619926       | -0.5344648     | 428.8114929           | torch.Size([2, 6, 512, 8, 1])    |
| 1578    | torch.clamp                                                                       | head.layers.17                                    | output              | torch.float32 |         | 0.0000100         | 56.5619926       | 7.3513718      | 153.2630768           | torch.Size([2, 6, 512, 8, 1])    |
| 1579    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.17.reciprocal_op                      | input               | torch.float32 |         | 0.0000100         | 56.5619926       | 7.3513718      | 153.2630768           | torch.Size([2, 6, 512, 8, 1])    |
| 1579    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.17.reciprocal_op                      | output              | torch.float32 |         | 0.0176797         | 100000.0000000   | 54600.3750000  | 2478850816.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 1580    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | input_0             | torch.float32 |         | -87.5456085       | 90.5733871       | 1.2284580      | 387.8705139           | torch.Size([2, 6, 512, 8, 4])    |
| 1580    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | output              | torch.float32 |         | -87.5456085       | 90.5733871       | 2.2241485      | 558.7699585           | torch.Size([2, 6, 512, 8, 2])    |
| 1581    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.point_mul                          | input_0             | torch.float32 |         | -87.5456085       | 90.5733871       | 2.2241485      | 558.7699585           | torch.Size([2, 6, 512, 8, 2])    |
| 1581    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.point_mul                          | input_1             | torch.float32 |         | 0.0176797         | 100000.0000000   | 54600.3750000  | 2478850816.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 1581    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.point_mul                          | output              | torch.float32 |         | -8749490.0000000  | 9057339.0000000  | 256072.4687500 | 2936244797440.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 1582    | torch.Tensor.flatten                                                              | head.layers.17                                    | input               | torch.float32 |         | -8749490.0000000  | 9057339.0000000  | 256072.4687500 | 2936244797440.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 1582    | torch.Tensor.flatten                                                              | head.layers.17                                    | output              | torch.float32 |         | -8749490.0000000  | 9057339.0000000  | 256072.4687500 | 2936244797440.0000000 | torch.Size([12, 512, 8, 2])      |
| 1583    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.17                                    | input_0             | torch.float32 |         | -44.8620338       | 31.9191360       | 0.1436918      | 20.2713203            | torch.Size([12, 256, 16, 44])    |
| 1583    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.17                                    | input_1             | torch.float32 |         | -8749490.0000000  | 9057339.0000000  | 256072.4687500 | 2936244797440.0000000 | torch.Size([12, 512, 8, 2])      |
| 1583    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.17                                    | output              | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([12, 256, 512, 8])    |
| 1584    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.17.feat_cat                           | input               | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([12, 256, 512, 8])    |
| 1584    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.17.feat_cat                           | output              | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([12, 256, 512, 8])    |
| 1585    | torch.Tensor.view                                                                 | head.layers.17                                    | input_0             | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([12, 256, 512, 8])    |
| 1585    | torch.Tensor.view                                                                 | head.layers.17                                    | output              | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 6, 256, 512, 8])  |
| 1586    | torch.Tensor.permute                                                              | head.layers.17                                    | input_0             | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 6, 256, 512, 8])  |
| 1586    | torch.Tensor.permute                                                              | head.layers.17                                    | output              | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 512, 6, 8, 256])  |
| 1587    | torch.Tensor.contiguous                                                           | head.layers.17                                    | input               | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 512, 6, 8, 256])  |
| 1587    | torch.Tensor.contiguous                                                           | head.layers.17                                    | output              | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 512, 6, 8, 256])  |
| 1588    | torch.Tensor.view                                                                 | head.layers.17                                    | input_0             | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 512, 6, 8, 256])  |
| 1588    | torch.Tensor.view                                                                 | head.layers.17                                    | output              | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 512, 48, 256])    |
| 1589    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | input_0             | torch.float32 |         | 0.0000001         | 0.2307888        | 0.0208333      | 0.0009864             | torch.Size([2, 512, 48, 8])      |
| 1589    | torch.Tensor.__getitem__                                                          | head.layers.17                                    | output              | torch.float32 |         | 0.0000001         | 0.2307888        | 0.0208333      | 0.0009864             | torch.Size([2, 512, 48, 8, 1])   |
| 1590    | torch.Tensor.reshape                                                              | head.layers.17                                    | input_0             | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 512, 48, 256])    |
| 1590    | torch.Tensor.reshape                                                              | head.layers.17                                    | output              | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 512, 48, 8, 32])  |
| 1591    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.feat_mul                           | input_0             | torch.float32 |         | 0.0000001         | 0.2307888        | 0.0208333      | 0.0009864             | torch.Size([2, 512, 48, 8, 1])   |
| 1591    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.feat_mul                           | input_1             | torch.float32 |         | -43.9785194       | 30.4140949       | 0.0305890      | 2.9869204             | torch.Size([2, 512, 48, 8, 32])  |
| 1591    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.17.feat_mul                           | output              | torch.float32 |         | -3.9522188        | 3.0801256        | 0.0006050      | 0.0039665             | torch.Size([2, 512, 48, 8, 32])  |
| 1592    | torch.Tensor.view                                                                 | head.layers.17                                    | input_0             | torch.float32 |         | -3.9522188        | 3.0801256        | 0.0006050      | 0.0039665             | torch.Size([2, 512, 48, 8, 32])  |
| 1592    | torch.Tensor.view                                                                 | head.layers.17                                    | output              | torch.float32 |         | -3.9522188        | 3.0801256        | 0.0006050      | 0.0039665             | torch.Size([2, 512, 48, 256])    |
| 1593    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.17.feat_sum                           | input               | torch.float32 |         | -3.9522188        | 3.0801256        | 0.0006050      | 0.0039665             | torch.Size([2, 512, 48, 256])    |
| 1593    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.17.feat_sum                           | output              | torch.float32 |         | -6.3935013        | 5.1537061        | 0.0290408      | 0.4591666             | torch.Size([2, 512, 256])        |
| 1594    | torch.nn.modules.linear.Linear                                                    | head.layers.17.output_proj                        | input               | torch.float32 |         | -6.3935013        | 5.1537061        | 0.0290408      | 0.4591666             | torch.Size([2, 512, 256])        |
| 1594    | torch.nn.modules.linear.Linear                                                    | head.layers.17.output_proj                        | weight              | torch.float32 |         | -0.2891404        | 0.3089988        | -0.0003690     | 0.0059508             | torch.Size([256, 256])           |
| 1594    | torch.nn.modules.linear.Linear                                                    | head.layers.17.output_proj                        | bias                | torch.float32 |         | -0.1011890        | 0.0951982        | -0.0002823     | 0.0014432             | torch.Size([256])                |
| 1594    | torch.nn.modules.linear.Linear                                                    | head.layers.17.output_proj                        | output              | torch.float32 |         | -8.3369684        | 7.3097701        | 0.0395680      | 0.8038763             | torch.Size([2, 512, 256])        |
| 1595    | torch.nn.modules.dropout.Dropout                                                  | head.layers.17.proj_drop                          | input               | torch.float32 |         | -8.3369684        | 7.3097701        | 0.0395680      | 0.8038763             | torch.Size([2, 512, 256])        |
| 1595    | torch.nn.modules.dropout.Dropout                                                  | head.layers.17.proj_drop                          | output              | torch.float32 |         | -8.3369684        | 7.3097701        | 0.0395680      | 0.8038763             | torch.Size([2, 512, 256])        |
| 1596    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.17.residual_op                        | input_0             | torch.float32 |         | -8.3369684        | 7.3097701        | 0.0395680      | 0.8038763             | torch.Size([2, 512, 256])        |
| 1596    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.17.residual_op                        | input_1             | torch.float32 |         | -7.5335484        | 4.7197452        | -0.0072047     | 0.6925276             | torch.Size([2, 512, 256])        |
| 1596    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.17.residual_op                        | output              | torch.float32 |         | -8.3369684        | 7.3097701        | 0.0161816      | 0.7487475             | torch.Size([2, 512, 512])        |
| 1597    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.18.pre_norm.input_mean.mean           | input_0             | torch.float32 |         | -8.3369684        | 7.3097701        | 0.0161816      | 0.7487475             | torch.Size([2, 512, 512])        |
| 1597    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.18.pre_norm.input_mean.mean           | output              | torch.float32 |         | -0.0448625        | 0.1018691        | 0.0161816      | 0.0003726             | torch.Size([2, 512, 1])          |
| 1598    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.18.pre_norm.sub                       | input_0             | torch.float32 |         | -8.3369684        | 7.3097701        | 0.0161816      | 0.7487475             | torch.Size([2, 512, 512])        |
| 1598    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.18.pre_norm.sub                       | input_1             | torch.float32 |         | -0.0448625        | 0.1018691        | 0.0161816      | 0.0003726             | torch.Size([2, 512, 1])          |
| 1598    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.18.pre_norm.sub                       | output              | torch.float32 |         | -8.3716879        | 7.2711630        | 0.0000000      | 0.7483752             | torch.Size([2, 512, 512])        |
| 1599    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.mul                       | input_0             | torch.float32 |         | -8.3716879        | 7.2711630        | 0.0000000      | 0.7483752             | torch.Size([2, 512, 512])        |
| 1599    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.mul                       | input_1             | torch.float32 |         | -8.3716879        | 7.2711630        | 0.0000000      | 0.7483752             | torch.Size([2, 512, 512])        |
| 1599    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.mul                       | output              | torch.float32 |         | 0.0000000         | 70.0851593       | 0.7483737      | 7.9118958             | torch.Size([2, 512, 512])        |
| 1600    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.18.pre_norm.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 70.0851593       | 0.7483737      | 7.9118958             | torch.Size([2, 512, 512])        |
| 1600    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.18.pre_norm.var_mean.mean             | output              | torch.float32 |         | 0.3567070         | 3.3788762        | 0.7483737      | 0.1304946             | torch.Size([2, 512, 1])          |
| 1601    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.18.pre_norm.rsqrt                     | input               | torch.float32 |         | 0.3567070         | 3.3788762        | 0.7483737      | 0.1304946             | torch.Size([2, 512, 1])          |
| 1601    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.18.pre_norm.rsqrt                     | output              | torch.float32 |         | 0.5440179         | 1.6743184        | 1.2401414      | 0.0679369             | torch.Size([2, 512, 1])          |
| 1602    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.out_mul                   | input_0             | torch.float32 |         | -8.3716879        | 7.2711630        | 0.0000000      | 0.7483752             | torch.Size([2, 512, 512])        |
| 1602    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.out_mul                   | input_1             | torch.float32 |         | 0.5440179         | 1.6743184        | 1.2401414      | 0.0679369             | torch.Size([2, 512, 1])          |
| 1602    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.out_mul                   | output              | torch.float32 |         | -10.2217655       | 7.5467024        | 0.0000000      | 0.9999858             | torch.Size([2, 512, 512])        |
| 1603    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.18.pre_norm.weight_quant              | input               | torch.float32 |         | 0.6495609         | 1.5811656        | 1.0579998      | 0.0720950             | torch.Size([512])                |
| 1603    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.18.pre_norm.weight_quant              | output              | torch.float32 |         | 0.6495609         | 1.5811656        | 1.0579998      | 0.0720950             | torch.Size([512])                |
| 1604    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.weight_mul                | input_0             | torch.float32 |         | -10.2217655       | 7.5467024        | 0.0000000      | 0.9999858             | torch.Size([2, 512, 512])        |
| 1604    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.weight_mul                | input_1             | torch.float32 |         | 0.6495609         | 1.5811656        | 1.0579998      | 0.0720950             | torch.Size([512])                |
| 1604    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.18.pre_norm.weight_mul                | output              | torch.float32 |         | -7.7994318        | 5.7828069        | 0.0026302      | 0.8498229             | torch.Size([2, 512, 512])        |
| 1605    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.18.pre_norm.bias_quant                | input               | torch.float32 |         | -0.2217483        | 0.2109743        | 0.0011747      | 0.0023772             | torch.Size([512])                |
| 1605    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.18.pre_norm.bias_quant                | output              | torch.float32 |         | -0.2217483        | 0.2109743        | 0.0011747      | 0.0023772             | torch.Size([512])                |
| 1606    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.18.pre_norm.bias_add                  | input_0             | torch.float32 |         | -7.7994318        | 5.7828069        | 0.0026302      | 0.8498229             | torch.Size([2, 512, 512])        |
| 1606    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.18.pre_norm.bias_add                  | input_1             | torch.float32 |         | -0.2217483        | 0.2109743        | 0.0011747      | 0.0023772             | torch.Size([512])                |
| 1606    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.18.pre_norm.bias_add                  | output              | torch.float32 |         | -7.6170335        | 5.6594391        | 0.0038050      | 0.8355025             | torch.Size([2, 512, 512])        |
| 1607    | torch.nn.modules.linear.Linear                                                    | head.layers.18.layers.0.0                         | input               | torch.float32 |         | -7.6170335        | 5.6594391        | 0.0038050      | 0.8355025             | torch.Size([2, 512, 512])        |
| 1607    | torch.nn.modules.linear.Linear                                                    | head.layers.18.layers.0.0                         | weight              | torch.float32 |         | -0.4454298        | 0.5020626        | -0.0008407     | 0.0058560             | torch.Size([1024, 512])          |
| 1607    | torch.nn.modules.linear.Linear                                                    | head.layers.18.layers.0.0                         | bias                | torch.float32 |         | -0.1510170        | 0.0629522        | -0.0535287     | 0.0011214             | torch.Size([1024])               |
| 1607    | torch.nn.modules.linear.Linear                                                    | head.layers.18.layers.0.0                         | output              | torch.float32 |         | -19.9217529       | 14.6932411       | -3.1231413     | 11.3202925            | torch.Size([2, 512, 1024])       |
| 1608    | torch.nn.modules.activation.ReLU                                                  | head.layers.18.activate                           | input               | torch.float32 |         | 0.0000000         | 14.6932411       | 0.3251952      | 1.4214419             | torch.Size([2, 512, 1024])       |
| 1608    | torch.nn.modules.activation.ReLU                                                  | head.layers.18.activate                           | output              | torch.float32 |         | 0.0000000         | 14.6932411       | 0.3251952      | 1.4214419             | torch.Size([2, 512, 1024])       |
| 1609    | torch.nn.modules.dropout.Dropout                                                  | head.layers.18.layers.0.2                         | input               | torch.float32 |         | 0.0000000         | 14.6932411       | 0.3251952      | 1.4214419             | torch.Size([2, 512, 1024])       |
| 1609    | torch.nn.modules.dropout.Dropout                                                  | head.layers.18.layers.0.2                         | output              | torch.float32 |         | 0.0000000         | 14.6932411       | 0.3251952      | 1.4214419             | torch.Size([2, 512, 1024])       |
| 1610    | torch.nn.modules.linear.Linear                                                    | head.layers.18.layers.1                           | input               | torch.float32 |         | 0.0000000         | 14.6932411       | 0.3251952      | 1.4214419             | torch.Size([2, 512, 1024])       |
| 1610    | torch.nn.modules.linear.Linear                                                    | head.layers.18.layers.1                           | weight              | torch.float32 |         | -0.3873430        | 0.3617197        | 0.0000918      | 0.0056267             | torch.Size([256, 1024])          |
| 1610    | torch.nn.modules.linear.Linear                                                    | head.layers.18.layers.1                           | bias                | torch.float32 |         | -0.0861191        | 0.0774464        | -0.0007529     | 0.0010433             | torch.Size([256])                |
| 1610    | torch.nn.modules.linear.Linear                                                    | head.layers.18.layers.1                           | output              | torch.float32 |         | -36.0624390       | 45.5157852       | 0.0663484      | 31.9320564            | torch.Size([2, 512, 256])        |
| 1611    | torch.nn.modules.dropout.Dropout                                                  | head.layers.18.layers.2                           | input               | torch.float32 |         | -36.0624390       | 45.5157852       | 0.0663484      | 31.9320564            | torch.Size([2, 512, 256])        |
| 1611    | torch.nn.modules.dropout.Dropout                                                  | head.layers.18.layers.2                           | output              | torch.float32 |         | -36.0624390       | 45.5157852       | 0.0663484      | 31.9320564            | torch.Size([2, 512, 256])        |
| 1612    | torch.nn.modules.linear.Linear                                                    | head.layers.18.identity_fc                        | input               | torch.float32 |         | -7.6170335        | 5.6594391        | 0.0038050      | 0.8355025             | torch.Size([2, 512, 512])        |
| 1612    | torch.nn.modules.linear.Linear                                                    | head.layers.18.identity_fc                        | weight              | torch.float32 |         | -0.3842853        | 0.4044652        | -0.0002469     | 0.0070671             | torch.Size([256, 512])           |
| 1612    | torch.nn.modules.linear.Linear                                                    | head.layers.18.identity_fc                        | bias                | torch.float32 |         | -0.0906205        | 0.0750783        | -0.0010049     | 0.0010887             | torch.Size([256])                |
| 1612    | torch.nn.modules.linear.Linear                                                    | head.layers.18.identity_fc                        | output              | torch.float32 |         | -14.6830730       | 13.6347122       | 0.0109869      | 12.2686968            | torch.Size([2, 512, 256])        |
| 1613    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.18.short_add                          | input_0             | torch.float32 |         | -14.6830730       | 13.6347122       | 0.0109869      | 12.2686968            | torch.Size([2, 512, 256])        |
| 1613    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.18.short_add                          | input_1             | torch.float32 |         | -36.0624390       | 45.5157852       | 0.0663484      | 31.9320564            | torch.Size([2, 512, 256])        |
| 1613    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.18.short_add                          | output              | torch.float32 |         | -35.8926811       | 52.8628922       | 0.0773353      | 53.7628403            | torch.Size([2, 512, 256])        |
| 1614    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.19.input_mean.mean                    | input_0             | torch.float32 |         | -35.8926811       | 52.8628922       | 0.0773353      | 53.7628403            | torch.Size([2, 512, 256])        |
| 1614    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.19.input_mean.mean                    | output              | torch.float32 |         | -0.2294865        | 0.4431200        | 0.0773353      | 0.0345308             | torch.Size([2, 512, 1])          |
| 1615    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.19.sub                                | input_0             | torch.float32 |         | -35.8926811       | 52.8628922       | 0.0773353      | 53.7628403            | torch.Size([2, 512, 256])        |
| 1615    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.19.sub                                | input_1             | torch.float32 |         | -0.2294865        | 0.4431200        | 0.0773353      | 0.0345308             | torch.Size([2, 512, 1])          |
| 1615    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.19.sub                                | output              | torch.float32 |         | -36.3358002       | 52.4197731       | 0.0000001      | 53.7283478            | torch.Size([2, 512, 256])        |
| 1616    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.19.mul                                | input_0             | torch.float32 |         | -36.3358002       | 52.4197731       | 0.0000001      | 53.7283478            | torch.Size([2, 512, 256])        |
| 1616    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.19.mul                                | input_1             | torch.float32 |         | -36.3358002       | 52.4197731       | 0.0000001      | 53.7283478            | torch.Size([2, 512, 256])        |
| 1616    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.19.mul                                | output              | torch.float32 |         | 0.0000000         | 2747.8325195     | 53.7281418     | 20044.8437500         | torch.Size([2, 512, 256])        |
| 1617    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.19.var_mean.mean                      | input_0             | torch.float32 |         | 0.0000000         | 2747.8325195     | 53.7281418     | 20044.8437500         | torch.Size([2, 512, 256])        |
| 1617    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.19.var_mean.mean                      | output              | torch.float32 |         | 6.2346158         | 201.3002930      | 53.7281456     | 4872.7622070          | torch.Size([2, 512, 1])          |
| 1618    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.19.rsqrt                              | input               | torch.float32 |         | 6.2346158         | 201.3002930      | 53.7281456     | 4872.7622070          | torch.Size([2, 512, 1])          |
| 1618    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.19.rsqrt                              | output              | torch.float32 |         | 0.0704819         | 0.4004929        | 0.2300382      | 0.0098710             | torch.Size([2, 512, 1])          |
| 1619    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.19.out_mul                            | input_0             | torch.float32 |         | -36.3358002       | 52.4197731       | 0.0000001      | 53.7283478            | torch.Size([2, 512, 256])        |
| 1619    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.19.out_mul                            | input_1             | torch.float32 |         | 0.0704819         | 0.4004929        | 0.2300382      | 0.0098710             | torch.Size([2, 512, 1])          |
| 1619    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.19.out_mul                            | output              | torch.float32 |         | -4.9155855        | 4.1378202        | 0.0000000      | 1.0000032             | torch.Size([2, 512, 256])        |
| 1620    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.19.weight_quant                       | input               | torch.float32 |         | 0.6796300         | 1.0328771        | 0.8834044      | 0.0047104             | torch.Size([256])                |
| 1620    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.19.weight_quant                       | output              | torch.float32 |         | 0.6796300         | 1.0328771        | 0.8834044      | 0.0047104             | torch.Size([256])                |
| 1621    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.19.weight_mul                         | input_0             | torch.float32 |         | -4.9155855        | 4.1378202        | 0.0000000      | 1.0000032             | torch.Size([2, 512, 256])        |
| 1621    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.19.weight_mul                         | input_1             | torch.float32 |         | 0.6796300         | 1.0328771        | 0.8834044      | 0.0047104             | torch.Size([256])                |
| 1621    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.19.weight_mul                         | output              | torch.float32 |         | -4.3445778        | 3.6446855        | 0.0014632      | 0.7831325             | torch.Size([2, 512, 256])        |
| 1622    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.19.bias_quant                         | input               | torch.float32 |         | -0.0769484        | 0.1481542        | 0.0026473      | 0.0013678             | torch.Size([256])                |
| 1622    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.19.bias_quant                         | output              | torch.float32 |         | -0.0769484        | 0.1481542        | 0.0026473      | 0.0013678             | torch.Size([256])                |
| 1623    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.19.bias_add                           | input_0             | torch.float32 |         | -4.3445778        | 3.6446855        | 0.0014632      | 0.7831325             | torch.Size([2, 512, 256])        |
| 1623    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.19.bias_add                           | input_1             | torch.float32 |         | -0.0769484        | 0.1481542        | 0.0026473      | 0.0013678             | torch.Size([256])                |
| 1623    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.19.bias_add                           | output              | torch.float32 |         | -4.3537626        | 3.6137092        | 0.0041105      | 0.7674054             | torch.Size([2, 512, 256])        |
| 1624    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.add1                               | input_0             | torch.float32 |         | -4.3537626        | 3.6137092        | 0.0041105      | 0.7674054             | torch.Size([2, 512, 256])        |
| 1624    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.add1                               | input_1             | torch.float32 |         | -1.7289910        | 7.5853052        | 0.0528211      | 0.8510517             | torch.Size([2, 512, 256])        |
| 1624    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.add1                               | output              | torch.float32 |         | -4.1337934        | 8.4953527        | 0.0569316      | 1.3018470             | torch.Size([2, 512, 256])        |
| 1625    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.0                           | input               | torch.float32 |         | -4.1337934        | 8.4953527        | 0.0569316      | 1.3018470             | torch.Size([2, 512, 256])        |
| 1625    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.0                           | weight              | torch.float32 |         | -0.5312872        | 0.8384986        | 0.0000412      | 0.0048373             | torch.Size([256, 256])           |
| 1625    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.0                           | bias                | torch.float32 |         | -0.1474053        | 0.0710347        | -0.0397527     | 0.0019485             | torch.Size([256])                |
| 1625    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.0                           | output              | torch.float32 |         | -10.2037258       | 8.8689871        | -0.8626234     | 4.3895178             | torch.Size([2, 512, 256])        |
| 1626    | torch.nn.modules.activation.ReLU                                                  | head.layers.20.layers.1                           | input               | torch.float32 |         | 0.0000000         | 8.8689871        | 0.4733208      | 0.8270773             | torch.Size([2, 512, 256])        |
| 1626    | torch.nn.modules.activation.ReLU                                                  | head.layers.20.layers.1                           | output              | torch.float32 |         | 0.0000000         | 8.8689871        | 0.4733208      | 0.8270773             | torch.Size([2, 512, 256])        |
| 1627    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.2                           | input               | torch.float32 |         | 0.0000000         | 8.8689871        | 0.4733208      | 0.8270773             | torch.Size([2, 512, 256])        |
| 1627    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.2                           | weight              | torch.float32 |         | -0.5925879        | 0.3864230        | -0.0059677     | 0.0050629             | torch.Size([256, 256])           |
| 1627    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.2                           | bias                | torch.float32 |         | -0.1329685        | 0.1114794        | -0.0053145     | 0.0022305             | torch.Size([256])                |
| 1627    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.2                           | output              | torch.float32 |         | -11.1233845       | 6.8557453        | -0.7014695     | 3.4229922             | torch.Size([2, 512, 256])        |
| 1628    | torch.nn.modules.activation.ReLU                                                  | head.layers.20.layers.3                           | input               | torch.float32 |         | 0.0000000         | 6.8557453        | 0.4066659      | 0.5425352             | torch.Size([2, 512, 256])        |
| 1628    | torch.nn.modules.activation.ReLU                                                  | head.layers.20.layers.3                           | output              | torch.float32 |         | 0.0000000         | 6.8557453        | 0.4066659      | 0.5425352             | torch.Size([2, 512, 256])        |
| 1629    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.20.layers.4.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 6.8557453        | 0.4066659      | 0.5425352             | torch.Size([2, 512, 256])        |
| 1629    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.20.layers.4.input_mean.mean           | output              | torch.float32 |         | 0.2007230         | 0.7461588        | 0.4066659      | 0.0100990             | torch.Size([2, 512, 1])          |
| 1630    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.20.layers.4.sub                       | input_0             | torch.float32 |         | 0.0000000         | 6.8557453        | 0.4066659      | 0.5425352             | torch.Size([2, 512, 256])        |
| 1630    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.20.layers.4.sub                       | input_1             | torch.float32 |         | 0.2007230         | 0.7461588        | 0.4066659      | 0.0100990             | torch.Size([2, 512, 1])          |
| 1630    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.20.layers.4.sub                       | output              | torch.float32 |         | -0.7461588        | 6.2357774        | 0.0000000      | 0.5324461             | torch.Size([2, 512, 256])        |
| 1631    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.mul                       | input_0             | torch.float32 |         | -0.7461588        | 6.2357774        | 0.0000000      | 0.5324461             | torch.Size([2, 512, 256])        |
| 1631    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.mul                       | input_1             | torch.float32 |         | -0.7461588        | 6.2357774        | 0.0000000      | 0.5324461             | torch.Size([2, 512, 256])        |
| 1631    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.mul                       | output              | torch.float32 |         | 0.0000000         | 38.8849182       | 0.5324441      | 1.8616247             | torch.Size([2, 512, 256])        |
| 1632    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.20.layers.4.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 38.8849182       | 0.5324441      | 1.8616247             | torch.Size([2, 512, 256])        |
| 1632    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.20.layers.4.var_mean.mean             | output              | torch.float32 |         | 0.1309915         | 1.5920768        | 0.5324441      | 0.0475223             | torch.Size([2, 512, 1])          |
| 1633    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.20.layers.4.rsqrt                     | input               | torch.float32 |         | 0.1309915         | 1.5920768        | 0.5324441      | 0.0475223             | torch.Size([2, 512, 1])          |
| 1633    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.20.layers.4.rsqrt                     | output              | torch.float32 |         | 0.7925317         | 2.7628784        | 1.4498107      | 0.0780223             | torch.Size([2, 512, 1])          |
| 1634    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.out_mul                   | input_0             | torch.float32 |         | -0.7461588        | 6.2357774        | 0.0000000      | 0.5324461             | torch.Size([2, 512, 256])        |
| 1634    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.out_mul                   | input_1             | torch.float32 |         | 0.7925317         | 2.7628784        | 1.4498107      | 0.0780223             | torch.Size([2, 512, 1])          |
| 1634    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.out_mul                   | output              | torch.float32 |         | -0.6959989        | 6.4649482        | 0.0000000      | 0.9999820             | torch.Size([2, 512, 256])        |
| 1635    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.4.weight_quant              | input               | torch.float32 |         | 0.7434729         | 1.2185259        | 0.9715712      | 0.0058709             | torch.Size([256])                |
| 1635    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.4.weight_quant              | output              | torch.float32 |         | 0.7434729         | 1.2185259        | 0.9715712      | 0.0058709             | torch.Size([256])                |
| 1636    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.weight_mul                | input_0             | torch.float32 |         | -0.6959989        | 6.4649482        | 0.0000000      | 0.9999820             | torch.Size([2, 512, 256])        |
| 1636    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.weight_mul                | input_1             | torch.float32 |         | 0.7434729         | 1.2185259        | 0.9715712      | 0.0058709             | torch.Size([256])                |
| 1636    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.4.weight_mul                | output              | torch.float32 |         | -0.8480927        | 6.6076417        | 0.0057906      | 0.9662712             | torch.Size([2, 512, 256])        |
| 1637    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.4.bias_quant                | input               | torch.float32 |         | -0.0757226        | 0.2495108        | 0.0394512      | 0.0048348             | torch.Size([256])                |
| 1637    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.4.bias_quant                | output              | torch.float32 |         | -0.0757226        | 0.2495108        | 0.0394512      | 0.0048348             | torch.Size([256])                |
| 1638    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.layers.4.bias_add                  | input_0             | torch.float32 |         | -0.8480927        | 6.6076417        | 0.0057906      | 0.9662712             | torch.Size([2, 512, 256])        |
| 1638    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.layers.4.bias_add                  | input_1             | torch.float32 |         | -0.0757226        | 0.2495108        | 0.0394512      | 0.0048348             | torch.Size([256])                |
| 1638    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.layers.4.bias_add                  | output              | torch.float32 |         | -0.8537493        | 6.5420494        | 0.0452418      | 0.9254606             | torch.Size([2, 512, 256])        |
| 1639    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.5                           | input               | torch.float32 |         | -0.8537493        | 6.5420494        | 0.0452418      | 0.9254606             | torch.Size([2, 512, 256])        |
| 1639    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.5                           | weight              | torch.float32 |         | -0.3297310        | 0.4340349        | 0.0047341      | 0.0039670             | torch.Size([256, 256])           |
| 1639    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.5                           | bias                | torch.float32 |         | -0.1393721        | 0.0863483        | -0.0307643     | 0.0023375             | torch.Size([256])                |
| 1639    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.5                           | output              | torch.float32 |         | -7.4880424        | 10.0457850       | -1.1053919     | 4.5683832             | torch.Size([2, 512, 256])        |
| 1640    | torch.nn.modules.activation.ReLU                                                  | head.layers.20.layers.6                           | input               | torch.float32 |         | 0.0000000         | 10.0457850       | 0.4648500      | 1.0477958             | torch.Size([2, 512, 256])        |
| 1640    | torch.nn.modules.activation.ReLU                                                  | head.layers.20.layers.6                           | output              | torch.float32 |         | 0.0000000         | 10.0457850       | 0.4648500      | 1.0477958             | torch.Size([2, 512, 256])        |
| 1641    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.7                           | input               | torch.float32 |         | 0.0000000         | 10.0457850       | 0.4648500      | 1.0477958             | torch.Size([2, 512, 256])        |
| 1641    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.7                           | weight              | torch.float32 |         | -0.3382548        | 0.4402925        | -0.0069504     | 0.0026877             | torch.Size([256, 256])           |
| 1641    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.7                           | bias                | torch.float32 |         | -0.0995187        | 0.1937151        | -0.0185211     | 0.0016516             | torch.Size([256])                |
| 1641    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.7                           | output              | torch.float32 |         | -10.4024487       | 40.8670425       | -1.8950262     | 6.4541197             | torch.Size([2, 512, 256])        |
| 1642    | torch.nn.modules.activation.ReLU                                                  | head.layers.20.layers.8                           | input               | torch.float32 |         | 0.0000000         | 40.8670425       | 0.2842541      | 2.6055329             | torch.Size([2, 512, 256])        |
| 1642    | torch.nn.modules.activation.ReLU                                                  | head.layers.20.layers.8                           | output              | torch.float32 |         | 0.0000000         | 40.8670425       | 0.2842541      | 2.6055329             | torch.Size([2, 512, 256])        |
| 1643    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.20.layers.9.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 40.8670425       | 0.2842541      | 2.6055329             | torch.Size([2, 512, 256])        |
| 1643    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.20.layers.9.input_mean.mean           | output              | torch.float32 |         | 0.1400327         | 0.6155337        | 0.2842541      | 0.0070759             | torch.Size([2, 512, 1])          |
| 1644    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.20.layers.9.sub                       | input_0             | torch.float32 |         | 0.0000000         | 40.8670425       | 0.2842541      | 2.6055329             | torch.Size([2, 512, 256])        |
| 1644    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.20.layers.9.sub                       | input_1             | torch.float32 |         | 0.1400327         | 0.6155337        | 0.2842541      | 0.0070759             | torch.Size([2, 512, 1])          |
| 1644    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.20.layers.9.sub                       | output              | torch.float32 |         | -0.6155337        | 40.5743599       | -0.0000000     | 2.5984640             | torch.Size([2, 512, 256])        |
| 1645    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.mul                       | input_0             | torch.float32 |         | -0.6155337        | 40.5743599       | -0.0000000     | 2.5984640             | torch.Size([2, 512, 256])        |
| 1645    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.mul                       | input_1             | torch.float32 |         | -0.6155337        | 40.5743599       | -0.0000000     | 2.5984640             | torch.Size([2, 512, 256])        |
| 1645    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.mul                       | output              | torch.float32 |         | 0.0000000         | 1646.2786865     | 2.5984542      | 1613.8322754          | torch.Size([2, 512, 256])        |
| 1646    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.20.layers.9.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 1646.2786865     | 2.5984542      | 1613.8322754          | torch.Size([2, 512, 256])        |
| 1646    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.20.layers.9.var_mean.mean             | output              | torch.float32 |         | 0.3278451         | 6.8538666        | 2.5984540      | 2.0961220             | torch.Size([2, 512, 1])          |
| 1647    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.20.layers.9.rsqrt                     | input               | torch.float32 |         | 0.3278451         | 6.8538666        | 2.5984540      | 2.0961220             | torch.Size([2, 512, 1])          |
| 1647    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.20.layers.9.rsqrt                     | output              | torch.float32 |         | 0.3819723         | 1.7464615        | 0.7042427      | 0.0471703             | torch.Size([2, 512, 1])          |
| 1648    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.out_mul                   | input_0             | torch.float32 |         | -0.6155337        | 40.5743599       | -0.0000000     | 2.5984640             | torch.Size([2, 512, 256])        |
| 1648    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.out_mul                   | input_1             | torch.float32 |         | 0.3819723         | 1.7464615        | 0.7042427      | 0.0471703             | torch.Size([2, 512, 1])          |
| 1648    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.out_mul                   | output              | torch.float32 |         | -0.4885379        | 15.6990232       | -0.0000000     | 0.9999983             | torch.Size([2, 512, 256])        |
| 1649    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.9.weight_quant              | input               | torch.float32 |         | 0.7900761         | 1.3101054        | 0.9095095      | 0.0016009             | torch.Size([256])                |
| 1649    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.9.weight_quant              | output              | torch.float32 |         | 0.7900761         | 1.3101054        | 0.9095095      | 0.0016009             | torch.Size([256])                |
| 1650    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.weight_mul                | input_0             | torch.float32 |         | -0.4885379        | 15.6990232       | -0.0000000     | 0.9999983             | torch.Size([2, 512, 256])        |
| 1650    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.weight_mul                | input_1             | torch.float32 |         | 0.7900761         | 1.3101054        | 0.9095095      | 0.0016009             | torch.Size([256])                |
| 1650    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.9.weight_mul                | output              | torch.float32 |         | -0.6400362        | 16.4289742       | -0.0022795     | 0.7025239             | torch.Size([2, 512, 256])        |
| 1651    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.9.bias_quant                | input               | torch.float32 |         | -0.1930256        | 0.0890824        | 0.0560105      | 0.0017839             | torch.Size([256])                |
| 1651    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.9.bias_quant                | output              | torch.float32 |         | -0.1930256        | 0.0890824        | 0.0560105      | 0.0017839             | torch.Size([256])                |
| 1652    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.layers.9.bias_add                  | input_0             | torch.float32 |         | -0.6400362        | 16.4289742       | -0.0022795     | 0.7025239             | torch.Size([2, 512, 256])        |
| 1652    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.layers.9.bias_add                  | input_1             | torch.float32 |         | -0.1930256        | 0.0890824        | 0.0560105      | 0.0017839             | torch.Size([256])                |
| 1652    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.layers.9.bias_add                  | output              | torch.float32 |         | -0.5790077        | 16.5108013       | 0.0537311      | 0.6670012             | torch.Size([2, 512, 256])        |
| 1653    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.10                          | input               | torch.float32 |         | -0.5790077        | 16.5108013       | 0.0537311      | 0.6670012             | torch.Size([2, 512, 256])        |
| 1653    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.10                          | weight              | torch.float32 |         | -0.4008031        | 0.6920518        | 0.0014364      | 0.0019408             | torch.Size([11, 256])            |
| 1653    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.10                          | bias                | torch.float32 |         | -0.0506025        | 0.0276851        | -0.0170499     | 0.0005790             | torch.Size([11])                 |
| 1653    | torch.nn.modules.linear.Linear                                                    | head.layers.20.layers.10                          | output              | torch.float32 |         | -6.0832868        | 15.1447506       | -0.0168944     | 0.8326243             | torch.Size([2, 512, 11])         |
| 1654    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.11.scale_quant_stub         | input               | torch.float32 |         | 0.0593412         | 0.6670731        | 0.3126911      | 0.0488246             | torch.Size([11])                 |
| 1654    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.20.layers.11.scale_quant_stub         | output              | torch.float32 |         | 0.0593412         | 0.6670731        | 0.3126911      | 0.0488246             | torch.Size([11])                 |
| 1655    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.11.mul                      | input_0             | torch.float32 |         | -6.0832868        | 15.1447506       | -0.0168944     | 0.8326243             | torch.Size([2, 512, 11])         |
| 1655    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.11.mul                      | input_1             | torch.float32 |         | 0.0593412         | 0.6670731        | 0.3126911      | 0.0488246             | torch.Size([11])                 |
| 1655    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.20.layers.11.mul                      | output              | torch.float32 |         | -3.9398444        | 4.2808061        | -0.0049353     | 0.1928912             | torch.Size([2, 512, 11])         |
| 1656    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.add2                               | input_0             | torch.float32 |         | -3.9398444        | 4.2808061        | -0.0049353     | 0.1928912             | torch.Size([2, 512, 11])         |
| 1656    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.add2                               | input_1             | torch.float32 |         | -53.5874481       | 53.7214432       | 0.2118326      | 78.6301498            | torch.Size([2, 512, 11])         |
| 1656    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.20.add2                               | output              | torch.float32 |         | -53.5869904       | 53.6926079       | 0.2068973      | 79.3955536            | torch.Size([2, 512, 11])         |
| 1657    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(2)                                   | input               | torch.float32 |         | -53.5869904       | 53.6926079       | 0.2068973      | 79.3955536            | torch.Size([2, 512, 11])         |
| 1657    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(2)                                   | output              | torch.float32 |         | -53.5869904       | 53.6926079       | 0.2068973      | 79.3955536            | torch.Size([2, 512, 11])         |
| 1658    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.5869904       | 53.6926079       | 0.2068973      | 79.3955536            | torch.Size([2, 512, 11])         |
| 1658    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -53.5869904       | 53.6926079       | 0.7796978      | 289.6354675           | torch.Size([2, 512, 3])          |
| 1659    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(4)                   | input               | torch.float32 |         | -53.5869904       | 53.6926079       | 0.7796978      | 289.6354675           | torch.Size([2, 512, 3])          |
| 1659    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(4)                   | weight              | torch.float32 |         | -0.9216561        | 0.9167990        | -0.0046354     | 0.1373587             | torch.Size([128, 3])             |
| 1659    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(4)                   | bias                | torch.float32 |         | -1.0762298        | 1.0183468        | -0.0273298     | 0.3650480             | torch.Size([128])                |
| 1659    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(4)                   | output              | torch.float32 |         | -33.0851631       | 34.6782722       | -0.1162824     | 70.9203110            | torch.Size([2, 512, 128])        |
| 1660    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(4)                   | input               | torch.float32 |         | 0.0000000         | 34.6782722       | 2.8891182      | 26.0066757            | torch.Size([2, 512, 128])        |
| 1660    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(4)                   | output              | torch.float32 |         | 0.0000000         | 34.6782722       | 2.8891182      | 26.0066757            | torch.Size([2, 512, 128])        |
| 1661    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 34.6782722       | 2.8891182      | 26.0066757            | torch.Size([2, 512, 128])        |
| 1661    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(4)   | output              | torch.float32 |         | 0.2405998         | 7.3202400        | 2.8891182      | 4.0425978             | torch.Size([2, 512, 1])          |
| 1662    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(4)               | input_0             | torch.float32 |         | 0.0000000         | 34.6782722       | 2.8891182      | 26.0066757            | torch.Size([2, 512, 128])        |
| 1662    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(4)               | input_1             | torch.float32 |         | 0.2405998         | 7.3202400        | 2.8891182      | 4.0425978             | torch.Size([2, 512, 1])          |
| 1662    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(4)               | output              | torch.float32 |         | -7.3202400        | 29.0596771       | 0.0000000      | 21.9679966            | torch.Size([2, 512, 128])        |
| 1663    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(4)               | input_0             | torch.float32 |         | -7.3202400        | 29.0596771       | 0.0000000      | 21.9679966            | torch.Size([2, 512, 128])        |
| 1663    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(4)               | input_1             | torch.float32 |         | -7.3202400        | 29.0596771       | 0.0000000      | 21.9679966            | torch.Size([2, 512, 128])        |
| 1663    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(4)               | output              | torch.float32 |         | 0.0000000         | 844.4648438      | 21.9678268     | 2623.7802734          | torch.Size([2, 512, 128])        |
| 1664    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(4)     | input_0             | torch.float32 |         | 0.0000000         | 844.4648438      | 21.9678268     | 2623.7802734          | torch.Size([2, 512, 128])        |
| 1664    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(4)     | output              | torch.float32 |         | 0.1145778         | 75.3773499       | 21.9678288     | 466.1291809           | torch.Size([2, 512, 1])          |
| 1665    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(4)             | input               | torch.float32 |         | 0.1145778         | 75.3773499       | 21.9678288     | 466.1291809           | torch.Size([2, 512, 1])          |
| 1665    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(4)             | output              | torch.float32 |         | 0.1151807         | 2.9541383        | 0.8982761      | 1.2600825             | torch.Size([2, 512, 1])          |
| 1666    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(4)           | input_0             | torch.float32 |         | -7.3202400        | 29.0596771       | 0.0000000      | 21.9679966            | torch.Size([2, 512, 128])        |
| 1666    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(4)           | input_1             | torch.float32 |         | 0.1151807         | 2.9541383        | 0.8982761      | 1.2600825             | torch.Size([2, 512, 1])          |
| 1666    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(4)           | output              | torch.float32 |         | -0.8848790        | 3.9874239        | 0.0000000      | 0.9999869             | torch.Size([2, 512, 128])        |
| 1667    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(4)      | input               | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 1667    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(4)      | output              | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 1668    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(4)        | input_0             | torch.float32 |         | -0.8848790        | 3.9874239        | 0.0000000      | 0.9999869             | torch.Size([2, 512, 128])        |
| 1668    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(4)        | input_1             | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 1668    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(4)        | output              | torch.float32 |         | -1.0499369        | 5.0116844        | -0.0007688     | 0.9496375             | torch.Size([2, 512, 128])        |
| 1669    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(4)        | input               | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 1669    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(4)        | output              | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 1670    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(4)          | input_0             | torch.float32 |         | -1.0499369        | 5.0116844        | -0.0007688     | 0.9496375             | torch.Size([2, 512, 128])        |
| 1670    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(4)          | input_1             | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 1670    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(4)          | output              | torch.float32 |         | -1.0502331        | 5.0079255        | 0.0080516      | 0.9429464             | torch.Size([2, 512, 128])        |
| 1671    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(4)                   | input               | torch.float32 |         | -1.0502331        | 5.0079255        | 0.0080516      | 0.9429464             | torch.Size([2, 512, 128])        |
| 1671    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(4)                   | weight              | torch.float32 |         | -0.3750711        | 0.3968706        | 0.0019093      | 0.0048458             | torch.Size([128, 128])           |
| 1671    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(4)                   | bias                | torch.float32 |         | -0.1863807        | 0.1385574        | -0.0156467     | 0.0047256             | torch.Size([128])                |
| 1671    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(4)                   | output              | torch.float32 |         | -7.7676930        | 6.7862263        | -0.0848704     | 3.1610088             | torch.Size([2, 512, 128])        |
| 1672    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(4)                   | input               | torch.float32 |         | 0.0000000         | 6.7862263        | 0.6247076      | 1.1319603             | torch.Size([2, 512, 128])        |
| 1672    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(4)                   | output              | torch.float32 |         | 0.0000000         | 6.7862263        | 0.6247076      | 1.1319603             | torch.Size([2, 512, 128])        |
| 1673    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 6.7862263        | 0.6247076      | 1.1319603             | torch.Size([2, 512, 128])        |
| 1673    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(4)   | output              | torch.float32 |         | 0.2880173         | 1.3017673        | 0.6247076      | 0.1418730             | torch.Size([2, 512, 1])          |
| 1674    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(4)               | input_0             | torch.float32 |         | 0.0000000         | 6.7862263        | 0.6247076      | 1.1319603             | torch.Size([2, 512, 128])        |
| 1674    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(4)               | input_1             | torch.float32 |         | 0.2880173         | 1.3017673        | 0.6247076      | 0.1418730             | torch.Size([2, 512, 1])          |
| 1674    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(4)               | output              | torch.float32 |         | -1.3017673        | 5.7001185        | -0.0000000     | 0.9902247             | torch.Size([2, 512, 128])        |
| 1675    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(4)               | input_0             | torch.float32 |         | -1.3017673        | 5.7001185        | -0.0000000     | 0.9902247             | torch.Size([2, 512, 128])        |
| 1675    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(4)               | input_1             | torch.float32 |         | -1.3017673        | 5.7001185        | -0.0000000     | 0.9902247             | torch.Size([2, 512, 128])        |
| 1675    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(4)               | output              | torch.float32 |         | 0.0000000         | 32.4913521       | 0.9902171      | 6.0653868             | torch.Size([2, 512, 128])        |
| 1676    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(4)     | input_0             | torch.float32 |         | 0.0000000         | 32.4913521       | 0.9902171      | 6.0653868             | torch.Size([2, 512, 128])        |
| 1676    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(4)     | output              | torch.float32 |         | 0.3048886         | 2.6165431        | 0.9902171      | 0.7414739             | torch.Size([2, 512, 1])          |
| 1677    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(4)             | input               | torch.float32 |         | 0.3048886         | 2.6165431        | 0.9902171      | 0.7414739             | torch.Size([2, 512, 1])          |
| 1677    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(4)             | output              | torch.float32 |         | 0.6182088         | 1.8110160        | 1.2479594      | 0.1497111             | torch.Size([2, 512, 1])          |
| 1678    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(4)           | input_0             | torch.float32 |         | -1.3017673        | 5.7001185        | -0.0000000     | 0.9902247             | torch.Size([2, 512, 128])        |
| 1678    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(4)           | input_1             | torch.float32 |         | 0.6182088         | 1.8110160        | 1.2479594      | 0.1497111             | torch.Size([2, 512, 1])          |
| 1678    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(4)           | output              | torch.float32 |         | -0.8047641        | 7.0598316        | -0.0000000     | 0.9999906             | torch.Size([2, 512, 128])        |
| 1679    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(4)      | input               | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 1679    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(4)      | output              | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 1680    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(4)        | input_0             | torch.float32 |         | -0.8047641        | 7.0598316        | -0.0000000     | 0.9999906             | torch.Size([2, 512, 128])        |
| 1680    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(4)        | input_1             | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 1680    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(4)        | output              | torch.float32 |         | -0.9554208        | 6.9371815        | 0.0342439      | 0.9460430             | torch.Size([2, 512, 128])        |
| 1681    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(4)        | input               | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 1681    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(4)        | output              | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 1682    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(4)          | input_0             | torch.float32 |         | -0.9554208        | 6.9371815        | 0.0342439      | 0.9460430             | torch.Size([2, 512, 128])        |
| 1682    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(4)          | input_1             | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 1682    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(4)          | output              | torch.float32 |         | -0.9736363        | 6.9336371        | 0.0660462      | 0.9214456             | torch.Size([2, 512, 128])        |
| 1683    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(4)                   | input               | torch.float32 |         | -0.9736363        | 6.9336371        | 0.0660462      | 0.9214456             | torch.Size([2, 512, 128])        |
| 1683    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(4)                   | weight              | torch.float32 |         | -0.7504157        | 0.4182976        | -0.0024651     | 0.0052447             | torch.Size([128, 128])           |
| 1683    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(4)                   | bias                | torch.float32 |         | -0.1397866        | 0.1210779        | 0.0064616      | 0.0040949             | torch.Size([128])                |
| 1683    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(4)                   | output              | torch.float32 |         | -9.0948601        | 6.9197593        | -0.0378855     | 4.6700883             | torch.Size([2, 512, 128])        |
| 1684    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(4)                   | input               | torch.float32 |         | 0.0000000         | 6.9197593        | 0.8120890      | 1.5022482             | torch.Size([2, 512, 128])        |
| 1684    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(4)                   | output              | torch.float32 |         | 0.0000000         | 6.9197593        | 0.8120890      | 1.5022482             | torch.Size([2, 512, 128])        |
| 1685    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 6.9197593        | 0.8120890      | 1.5022482             | torch.Size([2, 512, 128])        |
| 1685    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(4)   | output              | torch.float32 |         | 0.5493186         | 1.2353010        | 0.8120889      | 0.0591440             | torch.Size([2, 512, 1])          |
| 1686    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(4)               | input_0             | torch.float32 |         | 0.0000000         | 6.9197593        | 0.8120890      | 1.5022482             | torch.Size([2, 512, 128])        |
| 1686    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(4)               | input_1             | torch.float32 |         | 0.5493186         | 1.2353010        | 0.8120889      | 0.0591440             | torch.Size([2, 512, 1])          |
| 1686    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(4)               | output              | torch.float32 |         | -1.2353010        | 6.1397886        | 0.0000000      | 1.4431615             | torch.Size([2, 512, 128])        |
| 1687    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(4)               | input_0             | torch.float32 |         | -1.2353010        | 6.1397886        | 0.0000000      | 1.4431615             | torch.Size([2, 512, 128])        |
| 1687    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(4)               | input_1             | torch.float32 |         | -1.2353010        | 6.1397886        | 0.0000000      | 1.4431615             | torch.Size([2, 512, 128])        |
| 1687    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(4)               | output              | torch.float32 |         | 0.0000000         | 37.6970062       | 1.4431504      | 8.9295530             | torch.Size([2, 512, 128])        |
| 1688    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(4)     | input_0             | torch.float32 |         | 0.0000000         | 37.6970062       | 1.4431504      | 8.9295530             | torch.Size([2, 512, 128])        |
| 1688    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(4)     | output              | torch.float32 |         | 0.8217989         | 2.5569315        | 1.4431505      | 0.3999384             | torch.Size([2, 512, 1])          |
| 1689    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(4)             | input               | torch.float32 |         | 0.8217989         | 2.5569315        | 1.4431505      | 0.3999384             | torch.Size([2, 512, 1])          |
| 1689    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(4)             | output              | torch.float32 |         | 0.6253737         | 1.1030992        | 0.8824121      | 0.0243522             | torch.Size([2, 512, 1])          |
| 1690    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(4)           | input_0             | torch.float32 |         | -1.2353010        | 6.1397886        | 0.0000000      | 1.4431615             | torch.Size([2, 512, 128])        |
| 1690    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(4)           | input_1             | torch.float32 |         | 0.6253737         | 1.1030992        | 0.8824121      | 0.0243522             | torch.Size([2, 512, 1])          |
| 1690    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(4)           | output              | torch.float32 |         | -0.7734432        | 5.0278368        | 0.0000000      | 0.9999996             | torch.Size([2, 512, 128])        |
| 1691    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(4)      | input               | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 1691    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(4)      | output              | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 1692    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(4)        | input_0             | torch.float32 |         | -0.7734432        | 5.0278368        | 0.0000000      | 0.9999996             | torch.Size([2, 512, 128])        |
| 1692    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(4)        | input_1             | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 1692    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(4)        | output              | torch.float32 |         | -0.8701089        | 5.1785192        | 0.0157741      | 0.9931597             | torch.Size([2, 512, 128])        |
| 1693    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(4)        | input               | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 1693    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(4)        | output              | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 1694    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(4)          | input_0             | torch.float32 |         | -0.8701089        | 5.1785192        | 0.0157741      | 0.9931597             | torch.Size([2, 512, 128])        |
| 1694    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(4)          | input_1             | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 1694    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(4)          | output              | torch.float32 |         | -0.8578088        | 5.2034421        | 0.0374120      | 0.9793231             | torch.Size([2, 512, 128])        |
| 1695    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(4)                   | input               | torch.float32 |         | -0.8578088        | 5.2034421        | 0.0374120      | 0.9793231             | torch.Size([2, 512, 128])        |
| 1695    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(4)                   | weight              | torch.float32 |         | -0.4264432        | 0.3183554        | 0.0005866      | 0.0053991             | torch.Size([128, 128])           |
| 1695    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(4)                   | bias                | torch.float32 |         | -0.1690418        | 0.1536980        | -0.0166056     | 0.0039884             | torch.Size([128])                |
| 1695    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(4)                   | output              | torch.float32 |         | -11.5848827       | 10.2659721       | -0.4108323     | 4.4060264             | torch.Size([2, 512, 128])        |
| 1696    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(4)                  | input               | torch.float32 |         | 0.0000000         | 10.2659721       | 0.6254585      | 1.5355884             | torch.Size([2, 512, 128])        |
| 1696    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(4)                  | output              | torch.float32 |         | 0.0000000         | 10.2659721       | 0.6254585      | 1.5355884             | torch.Size([2, 512, 128])        |
| 1697    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(4)  | input_0             | torch.float32 |         | 0.0000000         | 10.2659721       | 0.6254585      | 1.5355884             | torch.Size([2, 512, 128])        |
| 1697    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(4)  | output              | torch.float32 |         | 0.5253450         | 0.7345474        | 0.6254585      | 0.0019280             | torch.Size([2, 512, 1])          |
| 1698    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(4)              | input_0             | torch.float32 |         | 0.0000000         | 10.2659721       | 0.6254585      | 1.5355884             | torch.Size([2, 512, 128])        |
| 1698    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(4)              | input_1             | torch.float32 |         | 0.5253450         | 0.7345474        | 0.6254585      | 0.0019280             | torch.Size([2, 512, 1])          |
| 1698    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(4)              | output              | torch.float32 |         | -0.7345474        | 9.7147322        | 0.0000000      | 1.5336622             | torch.Size([2, 512, 128])        |
| 1699    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(4)              | input_0             | torch.float32 |         | -0.7345474        | 9.7147322        | 0.0000000      | 1.5336622             | torch.Size([2, 512, 128])        |
| 1699    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(4)              | input_1             | torch.float32 |         | -0.7345474        | 9.7147322        | 0.0000000      | 1.5336622             | torch.Size([2, 512, 128])        |
| 1699    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(4)              | output              | torch.float32 |         | 0.0000000         | 94.3760223       | 1.5336505      | 24.4559288            | torch.Size([2, 512, 128])        |
| 1700    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(4)    | input_0             | torch.float32 |         | 0.0000000         | 94.3760223       | 1.5336505      | 24.4559288            | torch.Size([2, 512, 128])        |
| 1700    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(4)    | output              | torch.float32 |         | 1.0567411         | 1.9515458        | 1.5336506      | 0.0486155             | torch.Size([2, 512, 1])          |
| 1701    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(4)            | input               | torch.float32 |         | 1.0567411         | 1.9515458        | 1.5336506      | 0.0486155             | torch.Size([2, 512, 1])          |
| 1701    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(4)            | output              | torch.float32 |         | 0.7158293         | 0.9727777        | 0.8140180      | 0.0036685             | torch.Size([2, 512, 1])          |
| 1702    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(4)          | input_0             | torch.float32 |         | -0.7345474        | 9.7147322        | 0.0000000      | 1.5336622             | torch.Size([2, 512, 128])        |
| 1702    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(4)          | input_1             | torch.float32 |         | 0.7158293         | 0.9727777        | 0.8140180      | 0.0036685             | torch.Size([2, 512, 1])          |
| 1702    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(4)          | output              | torch.float32 |         | -0.5930752        | 7.3766050        | 0.0000000      | 1.0000010             | torch.Size([2, 512, 128])        |
| 1703    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(4)     | input               | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 1703    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(4)     | output              | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 1704    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(4)       | input_0             | torch.float32 |         | -0.5930752        | 7.3766050        | 0.0000000      | 1.0000010             | torch.Size([2, 512, 128])        |
| 1704    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(4)       | input_1             | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 1704    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(4)       | output              | torch.float32 |         | -0.8265768        | 7.4513373        | 0.0092755      | 0.9030203             | torch.Size([2, 512, 128])        |
| 1705    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(4)       | input               | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 1705    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(4)       | output              | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 1706    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(4)         | input_0             | torch.float32 |         | -0.8265768        | 7.4513373        | 0.0092755      | 0.9030203             | torch.Size([2, 512, 128])        |
| 1706    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(4)         | input_1             | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 1706    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(4)         | output              | torch.float32 |         | -0.8306979        | 7.4040437        | 0.0712658      | 0.8670998             | torch.Size([2, 512, 128])        |
| 1707    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.5869904       | 53.6926079       | 0.2068973      | 79.3955536            | torch.Size([2, 512, 11])         |
| 1707    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -1.0019287        | 2.6323998        | 0.2514051      | 0.3778867             | torch.Size([2, 512, 3])          |
| 1708    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(4)                  | input               | torch.float32 |         | -1.0019287        | 2.6323998        | 0.2514051      | 0.3778867             | torch.Size([2, 512, 3])          |
| 1708    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(4)                  | weight              | torch.float32 |         | -0.8288664        | 0.6362330        | 0.0683853      | 0.1118651             | torch.Size([32, 3])              |
| 1708    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(4)                  | bias                | torch.float32 |         | -0.5554879        | 0.5432062        | 0.0766153      | 0.1068659             | torch.Size([32])                 |
| 1708    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(4)                  | output              | torch.float32 |         | -1.8684095        | 2.2383544        | 0.1131107      | 0.2332458             | torch.Size([2, 512, 32])         |
| 1709    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(4)                  | input               | torch.float32 |         | 0.0000000         | 2.2383544        | 0.2551126      | 0.0975269             | torch.Size([2, 512, 32])         |
| 1709    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(4)                  | output              | torch.float32 |         | 0.0000000         | 2.2383544        | 0.2551126      | 0.0975269             | torch.Size([2, 512, 32])         |
| 1710    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(4)  | input_0             | torch.float32 |         | 0.0000000         | 2.2383544        | 0.2551126      | 0.0975269             | torch.Size([2, 512, 32])         |
| 1710    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(4)  | output              | torch.float32 |         | 0.1596419         | 0.6466667        | 0.2551126      | 0.0127192             | torch.Size([2, 512, 1])          |
| 1711    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(4)              | input_0             | torch.float32 |         | 0.0000000         | 2.2383544        | 0.2551126      | 0.0975269             | torch.Size([2, 512, 32])         |
| 1711    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(4)              | input_1             | torch.float32 |         | 0.1596419         | 0.6466667        | 0.2551126      | 0.0127192             | torch.Size([2, 512, 1])          |
| 1711    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(4)              | output              | torch.float32 |         | -0.6466667        | 1.5916877        | 0.0000000      | 0.0848197             | torch.Size([2, 512, 32])         |
| 1712    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(4)              | input_0             | torch.float32 |         | -0.6466667        | 1.5916877        | 0.0000000      | 0.0848197             | torch.Size([2, 512, 32])         |
| 1712    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(4)              | input_1             | torch.float32 |         | -0.6466667        | 1.5916877        | 0.0000000      | 0.0848197             | torch.Size([2, 512, 32])         |
| 1712    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(4)              | output              | torch.float32 |         | 0.0000000         | 2.5334697        | 0.0848171      | 0.0256075             | torch.Size([2, 512, 32])         |
| 1713    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(4)    | input_0             | torch.float32 |         | 0.0000000         | 2.5334697        | 0.0848171      | 0.0256075             | torch.Size([2, 512, 32])         |
| 1713    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(4)    | output              | torch.float32 |         | 0.0323494         | 0.4038574        | 0.0848171      | 0.0044920             | torch.Size([2, 512, 1])          |
| 1714    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(4)            | input               | torch.float32 |         | 0.0323494         | 0.4038574        | 0.0848171      | 0.0044920             | torch.Size([2, 512, 1])          |
| 1714    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(4)            | output              | torch.float32 |         | 1.5735501         | 5.5590382        | 4.0739403      | 1.4051607             | torch.Size([2, 512, 1])          |
| 1715    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(4)          | input_0             | torch.float32 |         | -0.6466667        | 1.5916877        | 0.0000000      | 0.0848197             | torch.Size([2, 512, 32])         |
| 1715    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(4)          | input_1             | torch.float32 |         | 1.5735501         | 5.5590382        | 4.0739403      | 1.4051607             | torch.Size([2, 512, 1])          |
| 1715    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(4)          | output              | torch.float32 |         | -1.0841513        | 3.0646324        | 0.0000000      | 0.9998506             | torch.Size([2, 512, 32])         |
| 1716    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(4)     | input               | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 1716    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(4)     | output              | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 1717    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(4)       | input_0             | torch.float32 |         | -1.0841513        | 3.0646324        | 0.0000000      | 0.9998506             | torch.Size([2, 512, 32])         |
| 1717    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(4)       | input_1             | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 1717    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(4)       | output              | torch.float32 |         | -1.2903305        | 3.2856221        | 0.0072508      | 0.9879227             | torch.Size([2, 512, 32])         |
| 1718    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(4)       | input               | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 1718    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(4)       | output              | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 1719    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(4)         | input_0             | torch.float32 |         | -1.2903305        | 3.2856221        | 0.0072508      | 0.9879227             | torch.Size([2, 512, 32])         |
| 1719    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(4)         | input_1             | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 1719    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(4)         | output              | torch.float32 |         | -1.2674924        | 3.2820013        | 0.0107770      | 0.9318825             | torch.Size([2, 512, 32])         |
| 1720    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(4)                  | input               | torch.float32 |         | -1.2674924        | 3.2820013        | 0.0107770      | 0.9318825             | torch.Size([2, 512, 32])         |
| 1720    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(4)                  | weight              | torch.float32 |         | -0.5793310        | 0.5422795        | -0.0032135     | 0.0176575             | torch.Size([32, 32])             |
| 1720    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(4)                  | bias                | torch.float32 |         | -0.1716317        | 0.2230143        | 0.0007250      | 0.0126328             | torch.Size([32])                 |
| 1720    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(4)                  | output              | torch.float32 |         | -4.2852831        | 2.1638427        | -0.2127224     | 1.4649713             | torch.Size([2, 512, 32])         |
| 1721    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(4)                  | input               | torch.float32 |         | 0.0000000         | 2.1638427        | 0.3713738      | 0.2668382             | torch.Size([2, 512, 32])         |
| 1721    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(4)                  | output              | torch.float32 |         | 0.0000000         | 2.1638427        | 0.3713738      | 0.2668382             | torch.Size([2, 512, 32])         |
| 1722    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(4)  | input_0             | torch.float32 |         | 0.0000000         | 2.1638427        | 0.3713738      | 0.2668382             | torch.Size([2, 512, 32])         |
| 1722    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(4)  | output              | torch.float32 |         | 0.2668810         | 0.4267110        | 0.3713738      | 0.0013837             | torch.Size([2, 512, 1])          |
| 1723    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(4)              | input_0             | torch.float32 |         | 0.0000000         | 2.1638427        | 0.3713738      | 0.2668382             | torch.Size([2, 512, 32])         |
| 1723    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(4)              | input_1             | torch.float32 |         | 0.2668810         | 0.4267110        | 0.3713738      | 0.0013837             | torch.Size([2, 512, 1])          |
| 1723    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(4)              | output              | torch.float32 |         | -0.4267110        | 1.8349223        | -0.0000000     | 0.2654558             | torch.Size([2, 512, 32])         |
| 1724    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(4)              | input_0             | torch.float32 |         | -0.4267110        | 1.8349223        | -0.0000000     | 0.2654558             | torch.Size([2, 512, 32])         |
| 1724    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(4)              | input_1             | torch.float32 |         | -0.4267110        | 1.8349223        | -0.0000000     | 0.2654558             | torch.Size([2, 512, 32])         |
| 1724    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(4)              | output              | torch.float32 |         | 0.0000000         | 3.3669398        | 0.2654477      | 0.2139698             | torch.Size([2, 512, 32])         |
| 1725    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(4)    | input_0             | torch.float32 |         | 0.0000000         | 3.3669398        | 0.2654477      | 0.2139698             | torch.Size([2, 512, 32])         |
| 1725    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(4)    | output              | torch.float32 |         | 0.1538555         | 0.3724569        | 0.2654477      | 0.0051813             | torch.Size([2, 512, 1])          |
| 1726    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(4)            | input               | torch.float32 |         | 0.1538555         | 0.3724569        | 0.2654477      | 0.0051813             | torch.Size([2, 512, 1])          |
| 1726    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(4)            | output              | torch.float32 |         | 1.6385366         | 2.5493493        | 2.0005255      | 0.0870199             | torch.Size([2, 512, 1])          |
| 1727    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(4)          | input_0             | torch.float32 |         | -0.4267110        | 1.8349223        | -0.0000000     | 0.2654558             | torch.Size([2, 512, 32])         |
| 1727    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(4)          | input_1             | torch.float32 |         | 1.6385366         | 2.5493493        | 2.0005255      | 0.0870199             | torch.Size([2, 512, 1])          |
| 1727    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(4)          | output              | torch.float32 |         | -0.9098256        | 3.8903124        | -0.0000000     | 0.9999896             | torch.Size([2, 512, 32])         |
| 1728    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(4)     | input               | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 1728    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(4)     | output              | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 1729    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(4)       | input_0             | torch.float32 |         | -0.9098256        | 3.8903124        | -0.0000000     | 0.9999896             | torch.Size([2, 512, 32])         |
| 1729    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(4)       | input_1             | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 1729    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(4)       | output              | torch.float32 |         | -0.9179587        | 3.7175899        | 0.0103734      | 0.9980138             | torch.Size([2, 512, 32])         |
| 1730    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(4)       | input               | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 1730    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(4)       | output              | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 1731    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(4)         | input_0             | torch.float32 |         | -0.9179587        | 3.7175899        | 0.0103734      | 0.9980138             | torch.Size([2, 512, 32])         |
| 1731    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(4)         | input_1             | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 1731    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(4)         | output              | torch.float32 |         | -0.8995520        | 3.7456124        | 0.0201355      | 0.9681129             | torch.Size([2, 512, 32])         |
| 1732    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(4)                  | input               | torch.float32 |         | -0.8995520        | 3.7456124        | 0.0201355      | 0.9681129             | torch.Size([2, 512, 32])         |
| 1732    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(4)                  | weight              | torch.float32 |         | -0.5712157        | 0.5219681        | -0.0062917     | 0.0166056             | torch.Size([32, 32])             |
| 1732    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(4)                  | bias                | torch.float32 |         | -0.1649730        | 0.2318604        | 0.0253026      | 0.0136139             | torch.Size([32])                 |
| 1732    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(4)                  | output              | torch.float32 |         | -4.3665991        | 2.6423812        | -0.1820364     | 1.3777862             | torch.Size([2, 512, 32])         |
| 1733    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(4)                  | input               | torch.float32 |         | 0.0000000         | 2.6423812        | 0.3720435      | 0.2762378             | torch.Size([2, 512, 32])         |
| 1733    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(4)                  | output              | torch.float32 |         | 0.0000000         | 2.6423812        | 0.3720435      | 0.2762378             | torch.Size([2, 512, 32])         |
| 1734    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(4)  | input_0             | torch.float32 |         | 0.0000000         | 2.6423812        | 0.3720435      | 0.2762378             | torch.Size([2, 512, 32])         |
| 1734    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(4)  | output              | torch.float32 |         | 0.1874149         | 0.4854755        | 0.3720435      | 0.0098265             | torch.Size([2, 512, 1])          |
| 1735    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(4)              | input_0             | torch.float32 |         | 0.0000000         | 2.6423812        | 0.3720435      | 0.2762378             | torch.Size([2, 512, 32])         |
| 1735    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(4)              | input_1             | torch.float32 |         | 0.1874149         | 0.4854755        | 0.3720435      | 0.0098265             | torch.Size([2, 512, 1])          |
| 1735    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(4)              | output              | torch.float32 |         | -0.4854755        | 2.1963472        | -0.0000000     | 0.2664205             | torch.Size([2, 512, 32])         |
| 1736    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(4)              | input_0             | torch.float32 |         | -0.4854755        | 2.1963472        | -0.0000000     | 0.2664205             | torch.Size([2, 512, 32])         |
| 1736    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(4)              | input_1             | torch.float32 |         | -0.4854755        | 2.1963472        | -0.0000000     | 0.2664205             | torch.Size([2, 512, 32])         |
| 1736    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(4)              | output              | torch.float32 |         | 0.0000000         | 4.8239412        | 0.2664124      | 0.2814806             | torch.Size([2, 512, 32])         |
| 1737    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(4)    | input_0             | torch.float32 |         | 0.0000000         | 4.8239412        | 0.2664124      | 0.2814806             | torch.Size([2, 512, 32])         |
| 1737    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(4)    | output              | torch.float32 |         | 0.1382510         | 0.3970056        | 0.2664124      | 0.0057127             | torch.Size([2, 512, 1])          |
| 1738    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(4)            | input               | torch.float32 |         | 0.1382510         | 0.3970056        | 0.2664124      | 0.0057127             | torch.Size([2, 512, 1])          |
| 1738    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(4)            | output              | torch.float32 |         | 1.5870703         | 2.6893673        | 2.0140872      | 0.1251043             | torch.Size([2, 512, 1])          |
| 1739    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(4)          | input_0             | torch.float32 |         | -0.4854755        | 2.1963472        | -0.0000000     | 0.2664205             | torch.Size([2, 512, 32])         |
| 1739    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(4)          | input_1             | torch.float32 |         | 1.5870703         | 2.6893673        | 2.0140872      | 0.1251043             | torch.Size([2, 512, 1])          |
| 1739    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(4)          | output              | torch.float32 |         | -0.9470318        | 3.7779000        | -0.0000000     | 0.9999887             | torch.Size([2, 512, 32])         |
| 1740    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(4)     | input               | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 1740    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(4)     | output              | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 1741    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(4)       | input_0             | torch.float32 |         | -0.9470318        | 3.7779000        | -0.0000000     | 0.9999887             | torch.Size([2, 512, 32])         |
| 1741    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(4)       | input_1             | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 1741    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(4)       | output              | torch.float32 |         | -1.0716120        | 4.0503225        | 0.0053315      | 1.0240438             | torch.Size([2, 512, 32])         |
| 1742    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(4)       | input               | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 1742    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(4)       | output              | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 1743    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(4)         | input_0             | torch.float32 |         | -1.0716120        | 4.0503225        | 0.0053315      | 1.0240438             | torch.Size([2, 512, 32])         |
| 1743    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(4)         | input_1             | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 1743    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(4)         | output              | torch.float32 |         | -1.0405313        | 4.0752511        | 0.0095277      | 1.0005164             | torch.Size([2, 512, 32])         |
| 1744    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(4)                  | input               | torch.float32 |         | -1.0405313        | 4.0752511        | 0.0095277      | 1.0005164             | torch.Size([2, 512, 32])         |
| 1744    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(4)                  | weight              | torch.float32 |         | -0.3204980        | 0.3365203        | -0.0020388     | 0.0145364             | torch.Size([32, 32])             |
| 1744    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(4)                  | bias                | torch.float32 |         | -0.1559148        | 0.2119379        | 0.0091616      | 0.0105488             | torch.Size([32])                 |
| 1744    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(4)                  | output              | torch.float32 |         | -2.2935290        | 2.6693156        | 0.0151856      | 0.8149231             | torch.Size([2, 512, 32])         |
| 1745    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(4)                 | input               | torch.float32 |         | 0.0000000         | 2.6693156        | 0.3645043      | 0.2887010             | torch.Size([2, 512, 32])         |
| 1745    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(4)                 | output              | torch.float32 |         | 0.0000000         | 2.6693156        | 0.3645043      | 0.2887010             | torch.Size([2, 512, 32])         |
| 1746    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(4) | input_0             | torch.float32 |         | 0.0000000         | 2.6693156        | 0.3645043      | 0.2887010             | torch.Size([2, 512, 32])         |
| 1746    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(4) | output              | torch.float32 |         | 0.2665875         | 0.5697377        | 0.3645044      | 0.0025171             | torch.Size([2, 512, 1])          |
| 1747    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(4)             | input_0             | torch.float32 |         | 0.0000000         | 2.6693156        | 0.3645043      | 0.2887010             | torch.Size([2, 512, 32])         |
| 1747    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(4)             | input_1             | torch.float32 |         | 0.2665875         | 0.5697377        | 0.3645044      | 0.0025171             | torch.Size([2, 512, 1])          |
| 1747    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(4)             | output              | torch.float32 |         | -0.5697377        | 2.2603502        | -0.0000000     | 0.2861863             | torch.Size([2, 512, 32])         |
| 1748    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(4)             | input_0             | torch.float32 |         | -0.5697377        | 2.2603502        | -0.0000000     | 0.2861863             | torch.Size([2, 512, 32])         |
| 1748    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(4)             | input_1             | torch.float32 |         | -0.5697377        | 2.2603502        | -0.0000000     | 0.2861863             | torch.Size([2, 512, 32])         |
| 1748    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(4)             | output              | torch.float32 |         | 0.0000000         | 5.1091833        | 0.2861776      | 0.3768575             | torch.Size([2, 512, 32])         |
| 1749    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 5.1091833        | 0.2861776      | 0.3768575             | torch.Size([2, 512, 32])         |
| 1749    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(4)   | output              | torch.float32 |         | 0.1788889         | 0.4090395        | 0.2861776      | 0.0015085             | torch.Size([2, 512, 1])          |
| 1750    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(4)           | input               | torch.float32 |         | 0.1788889         | 0.4090395        | 0.2861776      | 0.0015085             | torch.Size([2, 512, 1])          |
| 1750    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(4)           | output              | torch.float32 |         | 1.5635511         | 2.3642650        | 1.8821390      | 0.0163555             | torch.Size([2, 512, 1])          |
| 1751    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(4)         | input_0             | torch.float32 |         | -0.5697377        | 2.2603502        | -0.0000000     | 0.2861863             | torch.Size([2, 512, 32])         |
| 1751    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(4)         | input_1             | torch.float32 |         | 1.5635511         | 2.3642650        | 1.8821390      | 0.0163555             | torch.Size([2, 512, 1])          |
| 1751    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(4)         | output              | torch.float32 |         | -1.0618356        | 3.9582007        | -0.0000000     | 0.9999948             | torch.Size([2, 512, 32])         |
| 1752    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(4)    | input               | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 1752    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(4)    | output              | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 1753    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(4)      | input_0             | torch.float32 |         | -1.0618356        | 3.9582007        | -0.0000000     | 0.9999948             | torch.Size([2, 512, 32])         |
| 1753    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(4)      | input_1             | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 1753    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(4)      | output              | torch.float32 |         | -1.7636091        | 4.9802914        | -0.0354239     | 1.4283947             | torch.Size([2, 512, 32])         |
| 1754    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(4)      | input               | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 1754    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(4)      | output              | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 1755    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(4)        | input_0             | torch.float32 |         | -1.7636091        | 4.9802914        | -0.0354239     | 1.4283947             | torch.Size([2, 512, 32])         |
| 1755    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(4)        | input_1             | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 1755    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(4)        | output              | torch.float32 |         | -1.7142470        | 4.8672161        | 0.0091446      | 1.3390660             | torch.Size([2, 512, 32])         |
| 1756    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.5869904       | 53.6926079       | 0.2068973      | 79.3955536            | torch.Size([2, 512, 11])         |
| 1756    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -1.0265582        | 1.0349777        | -0.0453915     | 0.1095833             | torch.Size([2, 512, 2])          |
| 1757    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(4)                   | input               | torch.float32 |         | -1.0265582        | 1.0349777        | -0.0453915     | 0.1095833             | torch.Size([2, 512, 2])          |
| 1757    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(4)                   | weight              | torch.float32 |         | -0.7023237        | 0.7394427        | 0.0490668      | 0.1972211             | torch.Size([32, 2])              |
| 1757    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(4)                   | bias                | torch.float32 |         | -0.7971504        | 0.6681666        | -0.1171320     | 0.1641774             | torch.Size([32])                 |
| 1757    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(4)                   | output              | torch.float32 |         | -1.5118871        | 1.1791317        | -0.1219721     | 0.2015103             | torch.Size([2, 512, 32])         |
| 1758    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(4)                   | input               | torch.float32 |         | 0.0000000         | 1.1791317        | 0.1340322      | 0.0555829             | torch.Size([2, 512, 32])         |
| 1758    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(4)                   | output              | torch.float32 |         | 0.0000000         | 1.1791317        | 0.1340322      | 0.0555829             | torch.Size([2, 512, 32])         |
| 1759    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 1.1791317        | 0.1340322      | 0.0555829             | torch.Size([2, 512, 32])         |
| 1759    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(4)   | output              | torch.float32 |         | 0.1083490         | 0.2326485        | 0.1340322      | 0.0006989             | torch.Size([2, 512, 1])          |
| 1760    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(4)               | input_0             | torch.float32 |         | 0.0000000         | 1.1791317        | 0.1340322      | 0.0555829             | torch.Size([2, 512, 32])         |
| 1760    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(4)               | input_1             | torch.float32 |         | 0.1083490         | 0.2326485        | 0.1340322      | 0.0006989             | torch.Size([2, 512, 1])          |
| 1760    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(4)               | output              | torch.float32 |         | -0.2326485        | 0.9569517        | 0.0000000      | 0.0548847             | torch.Size([2, 512, 32])         |
| 1761    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(4)               | input_0             | torch.float32 |         | -0.2326485        | 0.9569517        | 0.0000000      | 0.0548847             | torch.Size([2, 512, 32])         |
| 1761    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(4)               | input_1             | torch.float32 |         | -0.2326485        | 0.9569517        | 0.0000000      | 0.0548847             | torch.Size([2, 512, 32])         |
| 1761    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(4)               | output              | torch.float32 |         | 0.0000000         | 0.9157566        | 0.0548830      | 0.0113167             | torch.Size([2, 512, 32])         |
| 1762    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(4)     | input_0             | torch.float32 |         | 0.0000000         | 0.9157566        | 0.0548830      | 0.0113167             | torch.Size([2, 512, 32])         |
| 1762    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(4)     | output              | torch.float32 |         | 0.0405978         | 0.1212448        | 0.0548830      | 0.0003296             | torch.Size([2, 512, 1])          |
| 1763    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(4)             | input               | torch.float32 |         | 0.0405978         | 0.1212448        | 0.0548830      | 0.0003296             | torch.Size([2, 512, 1])          |
| 1763    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(4)             | output              | torch.float32 |         | 2.8717754         | 4.9624381        | 4.3896503      | 0.2660610             | torch.Size([2, 512, 1])          |
| 1764    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(4)           | input_0             | torch.float32 |         | -0.2326485        | 0.9569517        | 0.0000000      | 0.0548847             | torch.Size([2, 512, 32])         |
| 1764    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(4)           | input_1             | torch.float32 |         | 2.8717754         | 4.9624381        | 4.3896503      | 0.2660610             | torch.Size([2, 512, 1])          |
| 1764    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(4)           | output              | torch.float32 |         | -0.7198363        | 4.0028319        | 0.0000000      | 0.9998351             | torch.Size([2, 512, 32])         |
| 1765    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(4)      | input               | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 1765    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(4)      | output              | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 1766    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(4)        | input_0             | torch.float32 |         | -0.7198363        | 4.0028319        | 0.0000000      | 0.9998351             | torch.Size([2, 512, 32])         |
| 1766    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(4)        | input_1             | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 1766    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(4)        | output              | torch.float32 |         | -0.8234826        | 4.3344131        | 0.0039428      | 1.0119954             | torch.Size([2, 512, 32])         |
| 1767    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(4)        | input               | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 1767    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(4)        | output              | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 1768    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(4)          | input_0             | torch.float32 |         | -0.8234826        | 4.3344131        | 0.0039428      | 1.0119954             | torch.Size([2, 512, 32])         |
| 1768    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(4)          | input_1             | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 1768    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(4)          | output              | torch.float32 |         | -0.7904227        | 4.2539515        | 0.0324467      | 0.9327564             | torch.Size([2, 512, 32])         |
| 1769    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(4)                   | input               | torch.float32 |         | -0.7904227        | 4.2539515        | 0.0324467      | 0.9327564             | torch.Size([2, 512, 32])         |
| 1769    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(4)                   | weight              | torch.float32 |         | -1.0547366        | 0.5812716        | 0.0070099      | 0.0187704             | torch.Size([32, 32])             |
| 1769    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(4)                   | bias                | torch.float32 |         | -0.2183180        | 0.1396109        | -0.0140744     | 0.0103446             | torch.Size([32])                 |
| 1769    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(4)                   | output              | torch.float32 |         | -5.3726487        | 1.7059574        | -0.5307595     | 1.4876972             | torch.Size([2, 512, 32])         |
| 1770    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(4)                   | input               | torch.float32 |         | 0.0000000         | 1.7059574        | 0.2312191      | 0.1272562             | torch.Size([2, 512, 32])         |
| 1770    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(4)                   | output              | torch.float32 |         | 0.0000000         | 1.7059574        | 0.2312191      | 0.1272562             | torch.Size([2, 512, 32])         |
| 1771    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 1.7059574        | 0.2312191      | 0.1272562             | torch.Size([2, 512, 32])         |
| 1771    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(4)   | output              | torch.float32 |         | 0.1700357         | 0.3504630        | 0.2312191      | 0.0008318             | torch.Size([2, 512, 1])          |
| 1772    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(4)               | input_0             | torch.float32 |         | 0.0000000         | 1.7059574        | 0.2312191      | 0.1272562             | torch.Size([2, 512, 32])         |
| 1772    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(4)               | input_1             | torch.float32 |         | 0.1700357         | 0.3504630        | 0.2312191      | 0.0008318             | torch.Size([2, 512, 1])          |
| 1772    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(4)               | output              | torch.float32 |         | -0.3504630        | 1.4393016        | 0.0000000      | 0.1264252             | torch.Size([2, 512, 32])         |
| 1773    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(4)               | input_0             | torch.float32 |         | -0.3504630        | 1.4393016        | 0.0000000      | 0.1264252             | torch.Size([2, 512, 32])         |
| 1773    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(4)               | input_1             | torch.float32 |         | -0.3504630        | 1.4393016        | 0.0000000      | 0.1264252             | torch.Size([2, 512, 32])         |
| 1773    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(4)               | output              | torch.float32 |         | 0.0000000         | 2.0715892        | 0.1264213      | 0.0529528             | torch.Size([2, 512, 32])         |
| 1774    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(4)     | input_0             | torch.float32 |         | 0.0000000         | 2.0715892        | 0.1264213      | 0.0529528             | torch.Size([2, 512, 32])         |
| 1774    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(4)     | output              | torch.float32 |         | 0.0773689         | 0.2168600        | 0.1264213      | 0.0005609             | torch.Size([2, 512, 1])          |
| 1775    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(4)             | input               | torch.float32 |         | 0.0773689         | 0.2168600        | 0.1264213      | 0.0005609             | torch.Size([2, 512, 1])          |
| 1775    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(4)             | output              | torch.float32 |         | 2.1473370         | 3.5949152        | 2.8452778      | 0.0588666             | torch.Size([2, 512, 1])          |
| 1776    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(4)           | input_0             | torch.float32 |         | -0.3504630        | 1.4393016        | 0.0000000      | 0.1264252             | torch.Size([2, 512, 32])         |
| 1776    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(4)           | input_1             | torch.float32 |         | 2.1473370         | 3.5949152        | 2.8452778      | 0.0588666             | torch.Size([2, 512, 1])          |
| 1776    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(4)           | output              | torch.float32 |         | -0.8253843        | 3.5705855        | -0.0000000     | 0.9999491             | torch.Size([2, 512, 32])         |
| 1777    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(4)      | input               | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 1777    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(4)      | output              | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 1778    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(4)        | input_0             | torch.float32 |         | -0.8253843        | 3.5705855        | -0.0000000     | 0.9999491             | torch.Size([2, 512, 32])         |
| 1778    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(4)        | input_1             | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 1778    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(4)        | output              | torch.float32 |         | -0.8976916        | 3.6437206        | -0.0021165     | 0.9791886             | torch.Size([2, 512, 32])         |
| 1779    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(4)        | input               | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 1779    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(4)        | output              | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 1780    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(4)          | input_0             | torch.float32 |         | -0.8976916        | 3.6437206        | -0.0021165     | 0.9791886             | torch.Size([2, 512, 32])         |
| 1780    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(4)          | input_1             | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 1780    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(4)          | output              | torch.float32 |         | -0.8385155        | 3.6029043        | 0.0221278      | 0.9234875             | torch.Size([2, 512, 32])         |
| 1781    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(4)                   | input               | torch.float32 |         | -0.8385155        | 3.6029043        | 0.0221278      | 0.9234875             | torch.Size([2, 512, 32])         |
| 1781    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(4)                   | weight              | torch.float32 |         | -0.4480607        | 0.3678726        | 0.0004879      | 0.0160908             | torch.Size([32, 32])             |
| 1781    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(4)                   | bias                | torch.float32 |         | -0.1861591        | 0.1739754        | 0.0155446      | 0.0137690             | torch.Size([32])                 |
| 1781    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(4)                   | output              | torch.float32 |         | -3.6771476        | 2.4028170        | -0.3095681     | 1.5500436             | torch.Size([2, 512, 32])         |
| 1782    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(4)                   | input               | torch.float32 |         | 0.0000000         | 2.4028170        | 0.3344164      | 0.1954678             | torch.Size([2, 512, 32])         |
| 1782    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(4)                   | output              | torch.float32 |         | 0.0000000         | 2.4028170        | 0.3344164      | 0.1954678             | torch.Size([2, 512, 32])         |
| 1783    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 2.4028170        | 0.3344164      | 0.1954678             | torch.Size([2, 512, 32])         |
| 1783    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(4)   | output              | torch.float32 |         | 0.2562374         | 0.4372390        | 0.3344164      | 0.0004731             | torch.Size([2, 512, 1])          |
| 1784    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(4)               | input_0             | torch.float32 |         | 0.0000000         | 2.4028170        | 0.3344164      | 0.1954678             | torch.Size([2, 512, 32])         |
| 1784    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(4)               | input_1             | torch.float32 |         | 0.2562374         | 0.4372390        | 0.3344164      | 0.0004731             | torch.Size([2, 512, 1])          |
| 1784    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(4)               | output              | torch.float32 |         | -0.4372390        | 2.1438529        | -0.0000000     | 0.1949951             | torch.Size([2, 512, 32])         |
| 1785    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(4)               | input_0             | torch.float32 |         | -0.4372390        | 2.1438529        | -0.0000000     | 0.1949951             | torch.Size([2, 512, 32])         |
| 1785    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(4)               | input_1             | torch.float32 |         | -0.4372390        | 2.1438529        | -0.0000000     | 0.1949951             | torch.Size([2, 512, 32])         |
| 1785    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(4)               | output              | torch.float32 |         | 0.0000000         | 4.5961056        | 0.1949892      | 0.1023593             | torch.Size([2, 512, 32])         |
| 1786    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(4)     | input_0             | torch.float32 |         | 0.0000000         | 4.5961056        | 0.1949892      | 0.1023593             | torch.Size([2, 512, 32])         |
| 1786    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(4)     | output              | torch.float32 |         | 0.1579534         | 0.2845918        | 0.1949892      | 0.0003718             | torch.Size([2, 512, 1])          |
| 1787    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(4)             | input               | torch.float32 |         | 0.1579534         | 0.2845918        | 0.1949892      | 0.0003718             | torch.Size([2, 512, 1])          |
| 1787    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(4)             | output              | torch.float32 |         | 1.8744813         | 2.5160649        | 2.2720132      | 0.0104342             | torch.Size([2, 512, 1])          |
| 1788    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(4)           | input_0             | torch.float32 |         | -0.4372390        | 2.1438529        | -0.0000000     | 0.1949951             | torch.Size([2, 512, 32])         |
| 1788    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(4)           | input_1             | torch.float32 |         | 1.8744813         | 2.5160649        | 2.2720132      | 0.0104342             | torch.Size([2, 512, 1])          |
| 1788    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(4)           | output              | torch.float32 |         | -0.8562101        | 4.4204292        | -0.0000000     | 0.9999788             | torch.Size([2, 512, 32])         |
| 1789    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(4)      | input               | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 1789    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(4)      | output              | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 1790    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(4)        | input_0             | torch.float32 |         | -0.8562101        | 4.4204292        | -0.0000000     | 0.9999788             | torch.Size([2, 512, 32])         |
| 1790    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(4)        | input_1             | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 1790    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(4)        | output              | torch.float32 |         | -0.9456356        | 4.7128415        | -0.0040893     | 0.9944656             | torch.Size([2, 512, 32])         |
| 1791    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(4)        | input               | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 1791    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(4)        | output              | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 1792    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(4)          | input_0             | torch.float32 |         | -0.9456356        | 4.7128415        | -0.0040893     | 0.9944656             | torch.Size([2, 512, 32])         |
| 1792    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(4)          | input_1             | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 1792    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(4)          | output              | torch.float32 |         | -0.9441522        | 4.7344475        | 0.0030804      | 0.9706210             | torch.Size([2, 512, 32])         |
| 1793    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(4)                   | input               | torch.float32 |         | -0.9441522        | 4.7344475        | 0.0030804      | 0.9706210             | torch.Size([2, 512, 32])         |
| 1793    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(4)                   | weight              | torch.float32 |         | -0.5597425        | 0.7001730        | 0.0015679      | 0.0160348             | torch.Size([32, 32])             |
| 1793    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(4)                   | bias                | torch.float32 |         | -0.1810580        | 0.1736723        | -0.0279047     | 0.0091159             | torch.Size([32])                 |
| 1793    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(4)                   | output              | torch.float32 |         | -4.3115287        | 3.0659373        | -0.2447872     | 1.2289486             | torch.Size([2, 512, 32])         |
| 1794    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(4)                  | input               | torch.float32 |         | 0.0000000         | 3.0659373        | 0.2862704      | 0.3410122             | torch.Size([2, 512, 32])         |
| 1794    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(4)                  | output              | torch.float32 |         | 0.0000000         | 3.0659373        | 0.2862704      | 0.3410122             | torch.Size([2, 512, 32])         |
| 1795    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(4)  | input_0             | torch.float32 |         | 0.0000000         | 3.0659373        | 0.2862704      | 0.3410122             | torch.Size([2, 512, 32])         |
| 1795    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(4)  | output              | torch.float32 |         | 0.2223254         | 0.3943865        | 0.2862704      | 0.0011112             | torch.Size([2, 512, 1])          |
| 1796    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(4)              | input_0             | torch.float32 |         | 0.0000000         | 3.0659373        | 0.2862704      | 0.3410122             | torch.Size([2, 512, 32])         |
| 1796    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(4)              | input_1             | torch.float32 |         | 0.2223254         | 0.3943865        | 0.2862704      | 0.0011112             | torch.Size([2, 512, 1])          |
| 1796    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(4)              | output              | torch.float32 |         | -0.3943865        | 2.7833216        | -0.0000000     | 0.3399021             | torch.Size([2, 512, 32])         |
| 1797    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(4)              | input_0             | torch.float32 |         | -0.3943865        | 2.7833216        | -0.0000000     | 0.3399021             | torch.Size([2, 512, 32])         |
| 1797    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(4)              | input_1             | torch.float32 |         | -0.3943865        | 2.7833216        | -0.0000000     | 0.3399021             | torch.Size([2, 512, 32])         |
| 1797    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(4)              | output              | torch.float32 |         | 0.0000000         | 7.7468791        | 0.3398917      | 1.1342928             | torch.Size([2, 512, 32])         |
| 1798    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(4)    | input_0             | torch.float32 |         | 0.0000000         | 7.7468791        | 0.3398917      | 1.1342928             | torch.Size([2, 512, 32])         |
| 1798    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(4)    | output              | torch.float32 |         | 0.1406022         | 0.4198106        | 0.3398917      | 0.0057346             | torch.Size([2, 512, 1])          |
| 1799    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(4)            | input               | torch.float32 |         | 0.1406022         | 0.4198106        | 0.3398917      | 0.0057346             | torch.Size([2, 512, 1])          |
| 1799    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(4)            | output              | torch.float32 |         | 1.5433631         | 2.6667886        | 1.7578144      | 0.0627890             | torch.Size([2, 512, 1])          |
| 1800    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(4)          | input_0             | torch.float32 |         | -0.3943865        | 2.7833216        | -0.0000000     | 0.3399021             | torch.Size([2, 512, 32])         |
| 1800    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(4)          | input_1             | torch.float32 |         | 1.5433631         | 2.6667886        | 1.7578144      | 0.0627890             | torch.Size([2, 512, 1])          |
| 1800    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(4)          | output              | torch.float32 |         | -0.7590414        | 4.7811608        | 0.0000000      | 0.9999989             | torch.Size([2, 512, 32])         |
| 1801    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(4)     | input               | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 1801    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(4)     | output              | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 1802    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(4)       | input_0             | torch.float32 |         | -0.7590414        | 4.7811608        | 0.0000000      | 0.9999989             | torch.Size([2, 512, 32])         |
| 1802    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(4)       | input_1             | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 1802    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(4)       | output              | torch.float32 |         | -1.1149061        | 4.0090837        | -0.0632417     | 0.8715194             | torch.Size([2, 512, 32])         |
| 1803    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(4)       | input               | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 1803    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(4)       | output              | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 1804    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(4)         | input_0             | torch.float32 |         | -1.1149061        | 4.0090837        | -0.0632417     | 0.8715194             | torch.Size([2, 512, 32])         |
| 1804    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(4)         | input_1             | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 1804    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(4)         | output              | torch.float32 |         | -0.9325133        | 3.9466233        | 0.0171374      | 0.7831472             | torch.Size([2, 512, 32])         |
| 1805    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.5869904       | 53.6926079       | 0.2068973      | 79.3955536            | torch.Size([2, 512, 11])         |
| 1805    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -2.4922700        | 2.8004334        | -0.2422186     | 0.5251814             | torch.Size([2, 512, 3])          |
| 1806    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(4)                   | input               | torch.float32 |         | -2.4922700        | 2.8004334        | -0.2422186     | 0.5251814             | torch.Size([2, 512, 3])          |
| 1806    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(4)                   | weight              | torch.float32 |         | -1.0475703        | 0.9848034        | -0.0054673     | 0.2080412             | torch.Size([64, 3])              |
| 1806    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(4)                   | bias                | torch.float32 |         | -0.8030427        | 0.5068271        | -0.0504076     | 0.1294928             | torch.Size([64])                 |
| 1806    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(4)                   | output              | torch.float32 |         | -2.7834206        | 2.6326604        | -0.0869361     | 0.3501328             | torch.Size([2, 512, 64])         |
| 1807    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(4)                   | input               | torch.float32 |         | 0.0000000         | 2.6326604        | 0.1853565      | 0.0806077             | torch.Size([2, 512, 64])         |
| 1807    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(4)                   | output              | torch.float32 |         | 0.0000000         | 2.6326604        | 0.1853565      | 0.0806077             | torch.Size([2, 512, 64])         |
| 1808    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 2.6326604        | 0.1853565      | 0.0806077             | torch.Size([2, 512, 64])         |
| 1808    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(4)   | output              | torch.float32 |         | 0.1253546         | 0.6471329        | 0.1853565      | 0.0075716             | torch.Size([2, 512, 1])          |
| 1809    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(4)               | input_0             | torch.float32 |         | 0.0000000         | 2.6326604        | 0.1853565      | 0.0806077             | torch.Size([2, 512, 64])         |
| 1809    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(4)               | input_1             | torch.float32 |         | 0.1253546         | 0.6471329        | 0.1853565      | 0.0075716             | torch.Size([2, 512, 1])          |
| 1809    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(4)               | output              | torch.float32 |         | -0.6471329        | 1.9855275        | -0.0000000     | 0.0730434             | torch.Size([2, 512, 64])         |
| 1810    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(4)               | input_0             | torch.float32 |         | -0.6471329        | 1.9855275        | -0.0000000     | 0.0730434             | torch.Size([2, 512, 64])         |
| 1810    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(4)               | input_1             | torch.float32 |         | -0.6471329        | 1.9855275        | -0.0000000     | 0.0730434             | torch.Size([2, 512, 64])         |
| 1810    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(4)               | output              | torch.float32 |         | 0.0000000         | 3.9423196        | 0.0730423      | 0.0344134             | torch.Size([2, 512, 64])         |
| 1811    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(4)     | input_0             | torch.float32 |         | 0.0000000         | 3.9423196        | 0.0730423      | 0.0344134             | torch.Size([2, 512, 64])         |
| 1811    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(4)     | output              | torch.float32 |         | 0.0269614         | 0.5435583        | 0.0730423      | 0.0049325             | torch.Size([2, 512, 1])          |
| 1812    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(4)             | input               | torch.float32 |         | 0.0269614         | 0.5435583        | 0.0730423      | 0.0049325             | torch.Size([2, 512, 1])          |
| 1812    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(4)             | output              | torch.float32 |         | 1.3563536         | 6.0890346        | 4.6409359      | 2.0320251             | torch.Size([2, 512, 1])          |
| 1813    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(4)           | input_0             | torch.float32 |         | -0.6471329        | 1.9855275        | -0.0000000     | 0.0730434             | torch.Size([2, 512, 64])         |
| 1813    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(4)           | input_1             | torch.float32 |         | 1.3563536         | 6.0890346        | 4.6409359      | 2.0320251             | torch.Size([2, 512, 1])          |
| 1813    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(4)           | output              | torch.float32 |         | -0.8974739        | 3.4721220        | 0.0000000      | 0.9997796             | torch.Size([2, 512, 64])         |
| 1814    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(4)      | input               | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 1814    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(4)      | output              | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 1815    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(4)        | input_0             | torch.float32 |         | -0.8974739        | 3.4721220        | 0.0000000      | 0.9997796             | torch.Size([2, 512, 64])         |
| 1815    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(4)        | input_1             | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 1815    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(4)        | output              | torch.float32 |         | -1.0124661        | 3.3779733        | 0.0116049      | 0.9450673             | torch.Size([2, 512, 64])         |
| 1816    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(4)        | input               | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 1816    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(4)        | output              | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 1817    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(4)          | input_0             | torch.float32 |         | -1.0124661        | 3.3779733        | 0.0116049      | 0.9450673             | torch.Size([2, 512, 64])         |
| 1817    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(4)          | input_1             | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 1817    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(4)          | output              | torch.float32 |         | -1.0061774        | 3.3319767        | 0.0420589      | 0.8542499             | torch.Size([2, 512, 64])         |
| 1818    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(4)                   | input               | torch.float32 |         | -1.0061774        | 3.3319767        | 0.0420589      | 0.8542499             | torch.Size([2, 512, 64])         |
| 1818    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(4)                   | weight              | torch.float32 |         | -0.4523612        | 0.4813256        | -0.0014562     | 0.0096743             | torch.Size([64, 64])             |
| 1818    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(4)                   | bias                | torch.float32 |         | -0.1183558        | 0.2243176        | 0.0150283      | 0.0049289             | torch.Size([64])                 |
| 1818    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(4)                   | output              | torch.float32 |         | -5.3385692        | 2.9520085        | -0.4159112     | 2.1017473             | torch.Size([2, 512, 64])         |
| 1819    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(4)                   | input               | torch.float32 |         | 0.0000000         | 2.9520085        | 0.3215145      | 0.2154614             | torch.Size([2, 512, 64])         |
| 1819    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(4)                   | output              | torch.float32 |         | 0.0000000         | 2.9520085        | 0.3215145      | 0.2154614             | torch.Size([2, 512, 64])         |
| 1820    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 2.9520085        | 0.3215145      | 0.2154614             | torch.Size([2, 512, 64])         |
| 1820    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(4)   | output              | torch.float32 |         | 0.2158876         | 0.5996204        | 0.3215145      | 0.0077988             | torch.Size([2, 512, 1])          |
| 1821    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(4)               | input_0             | torch.float32 |         | 0.0000000         | 2.9520085        | 0.3215145      | 0.2154614             | torch.Size([2, 512, 64])         |
| 1821    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(4)               | input_1             | torch.float32 |         | 0.2158876         | 0.5996204        | 0.3215145      | 0.0077988             | torch.Size([2, 512, 1])          |
| 1821    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(4)               | output              | torch.float32 |         | -0.5996204        | 2.3747790        | 0.0000000      | 0.2076700             | torch.Size([2, 512, 64])         |
| 1822    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(4)               | input_0             | torch.float32 |         | -0.5996204        | 2.3747790        | 0.0000000      | 0.2076700             | torch.Size([2, 512, 64])         |
| 1822    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(4)               | input_1             | torch.float32 |         | -0.5996204        | 2.3747790        | 0.0000000      | 0.2076700             | torch.Size([2, 512, 64])         |
| 1822    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(4)               | output              | torch.float32 |         | 0.0000000         | 5.6395750        | 0.2076669      | 0.2170507             | torch.Size([2, 512, 64])         |
| 1823    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(4)     | input_0             | torch.float32 |         | 0.0000000         | 5.6395750        | 0.2076669      | 0.2170507             | torch.Size([2, 512, 64])         |
| 1823    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(4)     | output              | torch.float32 |         | 0.0826823         | 0.6182311        | 0.2076669      | 0.0102883             | torch.Size([2, 512, 1])          |
| 1824    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(4)             | input               | torch.float32 |         | 0.0826823         | 0.6182311        | 0.2076669      | 0.0102883             | torch.Size([2, 512, 1])          |
| 1824    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(4)             | output              | torch.float32 |         | 1.2718066         | 3.4775021        | 2.4147713      | 0.4255118             | torch.Size([2, 512, 1])          |
| 1825    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(4)           | input_0             | torch.float32 |         | -0.5996204        | 2.3747790        | 0.0000000      | 0.2076700             | torch.Size([2, 512, 64])         |
| 1825    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(4)           | input_1             | torch.float32 |         | 1.2718066         | 3.4775021        | 2.4147713      | 0.4255118             | torch.Size([2, 512, 1])          |
| 1825    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(4)           | output              | torch.float32 |         | -0.8674209        | 4.3023348        | -0.0000000     | 0.9999527             | torch.Size([2, 512, 64])         |
| 1826    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(4)      | input               | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 1826    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(4)      | output              | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 1827    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(4)        | input_0             | torch.float32 |         | -0.8674209        | 4.3023348        | -0.0000000     | 0.9999527             | torch.Size([2, 512, 64])         |
| 1827    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(4)        | input_1             | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 1827    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(4)        | output              | torch.float32 |         | -0.9279024        | 4.1814156        | 0.0042465      | 0.9851189             | torch.Size([2, 512, 64])         |
| 1828    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(4)        | input               | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 1828    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(4)        | output              | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 1829    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(4)          | input_0             | torch.float32 |         | -0.9279024        | 4.1814156        | 0.0042465      | 0.9851189             | torch.Size([2, 512, 64])         |
| 1829    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(4)          | input_1             | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 1829    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(4)          | output              | torch.float32 |         | -0.8916395        | 4.1580353        | 0.0207408      | 0.9389256             | torch.Size([2, 512, 64])         |
| 1830    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(4)                   | input               | torch.float32 |         | -0.8916395        | 4.1580353        | 0.0207408      | 0.9389256             | torch.Size([2, 512, 64])         |
| 1830    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(4)                   | weight              | torch.float32 |         | -0.5707353        | 0.3620123        | -0.0010372     | 0.0088292             | torch.Size([64, 64])             |
| 1830    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(4)                   | bias                | torch.float32 |         | -0.1720246        | 0.1340137        | -0.0235144     | 0.0050507             | torch.Size([64])                 |
| 1830    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(4)                   | output              | torch.float32 |         | -5.3869367        | 3.7271249        | -0.3484704     | 2.1373534             | torch.Size([2, 512, 64])         |
| 1831    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(4)                   | input               | torch.float32 |         | 0.0000000         | 3.7271249        | 0.4472240      | 0.4941474             | torch.Size([2, 512, 64])         |
| 1831    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(4)                   | output              | torch.float32 |         | 0.0000000         | 3.7271249        | 0.4472240      | 0.4941474             | torch.Size([2, 512, 64])         |
| 1832    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(4)   | input_0             | torch.float32 |         | 0.0000000         | 3.7271249        | 0.4472240      | 0.4941474             | torch.Size([2, 512, 64])         |
| 1832    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(4)   | output              | torch.float32 |         | 0.3583882         | 0.5160842        | 0.4472241      | 0.0029958             | torch.Size([2, 512, 1])          |
| 1833    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(4)               | input_0             | torch.float32 |         | 0.0000000         | 3.7271249        | 0.4472240      | 0.4941474             | torch.Size([2, 512, 64])         |
| 1833    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(4)               | input_1             | torch.float32 |         | 0.3583882         | 0.5160842        | 0.4472241      | 0.0029958             | torch.Size([2, 512, 1])          |
| 1833    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(4)               | output              | torch.float32 |         | -0.5160842        | 3.2306521        | 0.0000000      | 0.4911545             | torch.Size([2, 512, 64])         |
| 1834    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(4)               | input_0             | torch.float32 |         | -0.5160842        | 3.2306521        | 0.0000000      | 0.4911545             | torch.Size([2, 512, 64])         |
| 1834    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(4)               | input_1             | torch.float32 |         | -0.5160842        | 3.2306521        | 0.0000000      | 0.4911545             | torch.Size([2, 512, 64])         |
| 1834    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(4)               | output              | torch.float32 |         | 0.0000000         | 10.4371128       | 0.4911470      | 1.0531884             | torch.Size([2, 512, 64])         |
| 1835    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(4)     | input_0             | torch.float32 |         | 0.0000000         | 10.4371128       | 0.4911470      | 1.0531884             | torch.Size([2, 512, 64])         |
| 1835    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(4)     | output              | torch.float32 |         | 0.3037764         | 0.7103736        | 0.4911470      | 0.0145153             | torch.Size([2, 512, 1])          |
| 1836    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(4)             | input               | torch.float32 |         | 0.3037764         | 0.7103736        | 0.4911470      | 0.0145153             | torch.Size([2, 512, 1])          |
| 1836    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(4)             | output              | torch.float32 |         | 1.1864611         | 1.8143280        | 1.4652426      | 0.0426806             | torch.Size([2, 512, 1])          |
| 1837    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(4)           | input_0             | torch.float32 |         | -0.5160842        | 3.2306521        | 0.0000000      | 0.4911545             | torch.Size([2, 512, 64])         |
| 1837    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(4)           | input_1             | torch.float32 |         | 1.1864611         | 1.8143280        | 1.4652426      | 0.0426806             | torch.Size([2, 512, 1])          |
| 1837    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(4)           | output              | torch.float32 |         | -0.6999788        | 4.1718450        | 0.0000000      | 0.9999933             | torch.Size([2, 512, 64])         |
| 1838    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(4)      | input               | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 1838    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(4)      | output              | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 1839    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(4)        | input_0             | torch.float32 |         | -0.6999788        | 4.1718450        | 0.0000000      | 0.9999933             | torch.Size([2, 512, 64])         |
| 1839    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(4)        | input_1             | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 1839    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(4)        | output              | torch.float32 |         | -0.7870592        | 4.3135581        | 0.0058128      | 1.0024225             | torch.Size([2, 512, 64])         |
| 1840    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(4)        | input               | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 1840    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(4)        | output              | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 1841    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(4)          | input_0             | torch.float32 |         | -0.7870592        | 4.3135581        | 0.0058128      | 1.0024225             | torch.Size([2, 512, 64])         |
| 1841    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(4)          | input_1             | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 1841    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(4)          | output              | torch.float32 |         | -0.7696544        | 4.2991877        | 0.0190956      | 0.9821205             | torch.Size([2, 512, 64])         |
| 1842    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(4)                   | input               | torch.float32 |         | -0.7696544        | 4.2991877        | 0.0190956      | 0.9821205             | torch.Size([2, 512, 64])         |
| 1842    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(4)                   | weight              | torch.float32 |         | -0.5701389        | 0.3477888        | 0.0006721      | 0.0085883             | torch.Size([64, 64])             |
| 1842    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(4)                   | bias                | torch.float32 |         | -0.1677032        | 0.1709885        | -0.0237130     | 0.0070098             | torch.Size([64])                 |
| 1842    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(4)                   | output              | torch.float32 |         | -4.7758894        | 7.2153945        | -0.4923075     | 1.7677264             | torch.Size([2, 512, 64])         |
| 1843    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(4)                  | input               | torch.float32 |         | 0.0000000         | 7.2153945        | 0.2551251      | 0.6723065             | torch.Size([2, 512, 64])         |
| 1843    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(4)                  | output              | torch.float32 |         | 0.0000000         | 7.2153945        | 0.2551251      | 0.6723065             | torch.Size([2, 512, 64])         |
| 1844    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(4)  | input_0             | torch.float32 |         | 0.0000000         | 7.2153945        | 0.2551251      | 0.6723065             | torch.Size([2, 512, 64])         |
| 1844    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(4)  | output              | torch.float32 |         | 0.2026281         | 0.3882470        | 0.2551251      | 0.0017857             | torch.Size([2, 512, 1])          |
| 1845    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(4)              | input_0             | torch.float32 |         | 0.0000000         | 7.2153945        | 0.2551251      | 0.6723065             | torch.Size([2, 512, 64])         |
| 1845    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(4)              | input_1             | torch.float32 |         | 0.2026281         | 0.3882470        | 0.2551251      | 0.0017857             | torch.Size([2, 512, 1])          |
| 1845    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(4)              | output              | torch.float32 |         | -0.3882470        | 7.0098672        | -0.0000000     | 0.6705226             | torch.Size([2, 512, 64])         |
| 1846    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(4)              | input_0             | torch.float32 |         | -0.3882470        | 7.0098672        | -0.0000000     | 0.6705226             | torch.Size([2, 512, 64])         |
| 1846    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(4)              | input_1             | torch.float32 |         | -0.3882470        | 7.0098672        | -0.0000000     | 0.6705226             | torch.Size([2, 512, 64])         |
| 1846    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(4)              | output              | torch.float32 |         | 0.0000000         | 49.1382370       | 0.6705122      | 19.7196655            | torch.Size([2, 512, 64])         |
| 1847    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(4)    | input_0             | torch.float32 |         | 0.0000000         | 49.1382370       | 0.6705122      | 19.7196655            | torch.Size([2, 512, 64])         |
| 1847    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(4)    | output              | torch.float32 |         | 0.3152081         | 0.8266947        | 0.6705123      | 0.0150770             | torch.Size([2, 512, 1])          |
| 1848    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(4)            | input               | torch.float32 |         | 0.3152081         | 0.8266947        | 0.6705123      | 0.0150770             | torch.Size([2, 512, 1])          |
| 1848    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(4)            | output              | torch.float32 |         | 1.0998280         | 1.7811249        | 1.2391703      | 0.0170634             | torch.Size([2, 512, 1])          |
| 1849    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(4)          | input_0             | torch.float32 |         | -0.3882470        | 7.0098672        | -0.0000000     | 0.6705226             | torch.Size([2, 512, 64])         |
| 1849    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(4)          | input_1             | torch.float32 |         | 1.0998280         | 1.7811249        | 1.2391703      | 0.0170634             | torch.Size([2, 512, 1])          |
| 1849    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(4)          | output              | torch.float32 |         | -0.6496291        | 7.7571650        | 0.0000000      | 0.9999998             | torch.Size([2, 512, 64])         |
| 1850    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(4)     | input               | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 1850    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(4)     | output              | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 1851    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(4)       | input_0             | torch.float32 |         | -0.6496291        | 7.7571650        | 0.0000000      | 0.9999998             | torch.Size([2, 512, 64])         |
| 1851    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(4)       | input_1             | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 1851    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(4)       | output              | torch.float32 |         | -0.8105335        | 5.7053895        | -0.0324572     | 0.7079408             | torch.Size([2, 512, 64])         |
| 1852    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(4)       | input               | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 1852    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(4)       | output              | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 1853    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(4)         | input_0             | torch.float32 |         | -0.8105335        | 5.7053895        | -0.0324572     | 0.7079408             | torch.Size([2, 512, 64])         |
| 1853    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(4)         | input_1             | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 1853    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(4)         | output              | torch.float32 |         | -0.7907217        | 5.6130829        | 0.0575482      | 0.6250352             | torch.Size([2, 512, 64])         |
| 1854    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | input_0             | torch.float32 |         | -0.8306979        | 7.4040437        | 0.0712658      | 0.8670998             | torch.Size([2, 512, 128])        |
| 1854    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | input_1             | torch.float32 |         | -1.7142470        | 4.8672161        | 0.0091446      | 1.3390660             | torch.Size([2, 512, 32])         |
| 1854    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | input_2             | torch.float32 |         | -0.9325133        | 3.9466233        | 0.0171374      | 0.7831472             | torch.Size([2, 512, 32])         |
| 1854    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | input_3             | torch.float32 |         | -0.7907217        | 5.6130829        | 0.0575482      | 0.6250352             | torch.Size([2, 512, 64])         |
| 1854    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(4)                        | output              | torch.float32 |         | -1.7142470        | 7.4040437        | 0.0533052      | 0.8556479             | torch.Size([2, 512, 256])        |
| 1855    | torch.nn.modules.linear.Linear                                                    | head.fc_before(6)                                 | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 1855    | torch.nn.modules.linear.Linear                                                    | head.fc_before(6)                                 | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 1855    | torch.nn.modules.linear.Linear                                                    | head.fc_before(6)                                 | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 1856    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.21.query_cat                          | input_0             | torch.float32 |         | -4.3537626        | 3.6137092        | 0.0041105      | 0.7674054             | torch.Size([2, 512, 256])        |
| 1856    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.21.query_cat                          | input_1             | torch.float32 |         | -1.7142470        | 7.4040437        | 0.0533052      | 0.8556479             | torch.Size([2, 512, 256])        |
| 1856    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.21.query_cat                          | output              | torch.float32 |         | -4.3537626        | 7.4040437        | 0.0287078      | 0.8121301             | torch.Size([2, 512, 512])        |
| 1857    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.21.key_cat                            | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 1857    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.21.key_cat                            | input_1             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0508909      | 0.8514420             | torch.Size([2, 256, 256])        |
| 1857    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.21.key_cat                            | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 1858    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | input_0             | torch.float32 |         | -4.3537626        | 7.4040437        | 0.0287078      | 0.8121301             | torch.Size([2, 512, 512])        |
| 1858    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | output              | torch.float32 |         | -4.3537626        | 7.4040437        | 0.0287078      | 0.8121301             | torch.Size([512, 2, 512])        |
| 1859    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 1859    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1860    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 1860    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1861    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | input_0             | torch.float32 |         | -4.3537626        | 7.4040437        | 0.0287078      | 0.8121301             | torch.Size([512, 2, 512])        |
| 1861    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | output              | torch.float32 |         | -4.3537626        | 7.4040437        | 0.0287078      | 0.8121301             | torch.Size([512, 2, 512])        |
| 1862    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1862    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1863    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1863    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1864    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.q_proj                        | input               | torch.float32 |         | -4.3537626        | 7.4040437        | 0.0287078      | 0.8121301             | torch.Size([512, 2, 512])        |
| 1864    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.q_proj                        | weight              | torch.float32 |         | -0.2718778        | 0.2867957        | -0.0000759     | 0.0035608             | torch.Size([512, 512])           |
| 1864    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.q_proj                        | bias                | torch.float32 |         | -0.1191430        | 0.1196405        | 0.0007935      | 0.0012712             | torch.Size([512])                |
| 1864    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.q_proj                        | output              | torch.float32 |         | -15.7925167       | 14.4610510       | 0.0451898      | 12.1617727            | torch.Size([512, 2, 512])        |
| 1865    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.k_proj                        | input               | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 1865    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.k_proj                        | weight              | torch.float32 |         | -0.2869442        | 0.2633475        | 0.0000353      | 0.0036706             | torch.Size([512, 512])           |
| 1865    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.k_proj                        | bias                | torch.float32 |         | -0.0028050        | 0.0033431        | 0.0000168      | 0.0000008             | torch.Size([512])                |
| 1865    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.k_proj                        | output              | torch.float32 |         | -4.8847404        | 5.0299459        | 0.1016060      | 3.7629018             | torch.Size([256, 2, 512])        |
| 1866    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.v_proj                        | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 1866    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.v_proj                        | weight              | torch.float32 |         | -0.1508207        | 0.1581457        | -0.0000932     | 0.0012603             | torch.Size([512, 512])           |
| 1866    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.v_proj                        | bias                | torch.float32 |         | -0.0568344        | 0.0711433        | 0.0019992      | 0.0005089             | torch.Size([512])                |
| 1866    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.v_proj                        | output              | torch.float32 |         | -0.0568344        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([256, 2, 512])        |
| 1867    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | input_0             | torch.float32 |         | -15.7925167       | 14.4610510       | 0.0451898      | 12.1617727            | torch.Size([512, 2, 512])        |
| 1867    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | output              | torch.float32 |         | -15.7925167       | 14.4610510       | 0.0451898      | 12.1617727            | torch.Size([512, 16, 64])        |
| 1868    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | input_0             | torch.float32 |         | -15.7925167       | 14.4610510       | 0.0451898      | 12.1617727            | torch.Size([512, 16, 64])        |
| 1868    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | output              | torch.float32 |         | -15.7925167       | 14.4610510       | 0.0451898      | 12.1617727            | torch.Size([16, 512, 64])        |
| 1869    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | input_0             | torch.float32 |         | -4.8847404        | 5.0299459        | 0.1016060      | 3.7629018             | torch.Size([256, 2, 512])        |
| 1869    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | output              | torch.float32 |         | -4.8847404        | 5.0299459        | 0.1016060      | 3.7629018             | torch.Size([256, 16, 64])        |
| 1870    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | input_0             | torch.float32 |         | -4.8847404        | 5.0299459        | 0.1016060      | 3.7629018             | torch.Size([256, 16, 64])        |
| 1870    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | output              | torch.float32 |         | -4.8847404        | 5.0299459        | 0.1016060      | 3.7629018             | torch.Size([16, 256, 64])        |
| 1871    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | input_0             | torch.float32 |         | -0.0568344        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([256, 2, 512])        |
| 1871    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | output              | torch.float32 |         | -0.0568344        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([256, 16, 64])        |
| 1872    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | input_0             | torch.float32 |         | -0.0568344        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([256, 16, 64])        |
| 1872    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | output              | torch.float32 |         | -0.0568344        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([16, 256, 64])        |
| 1873    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.21.attn.q_scale_mul                   | input_0             | torch.float32 |         | -15.7925167       | 14.4610510       | 0.0451898      | 12.1617727            | torch.Size([16, 512, 64])        |
| 1873    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.21.attn.q_scale_mul                   | output              | torch.float32 |         | -1.9740646        | 1.8076314        | 0.0056487      | 0.1900277             | torch.Size([16, 512, 64])        |
| 1874    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | input_0             | torch.float32 |         | -4.8847404        | 5.0299459        | 0.1016060      | 3.7629018             | torch.Size([16, 256, 64])        |
| 1874    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | output              | torch.float32 |         | -4.8847404        | 5.0299459        | 0.1016060      | 3.7629018             | torch.Size([16, 64, 256])        |
| 1875    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.21.attn.matmul                        | input_0             | torch.float32 |         | -1.9740646        | 1.8076314        | 0.0056487      | 0.1900277             | torch.Size([16, 512, 64])        |
| 1875    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.21.attn.matmul                        | input_1             | torch.float32 |         | -4.8847404        | 5.0299459        | 0.1016060      | 3.7629018             | torch.Size([16, 64, 256])        |
| 1875    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.21.attn.matmul                        | output              | torch.float32 |         | -94.3492889       | 81.7532043       | -4.9739466     | 674.5888062           | torch.Size([16, 512, 256])       |
| 1876    | torch.Tensor.max                                                                  | head.layers.21.attn.softmax                       | input               | torch.float32 |         | -94.3492889       | 81.7532043       | -4.9739466     | 674.5888062           | torch.Size([16, 512, 256])       |
| 1876    | torch.Tensor.max                                                                  | head.layers.21.attn.softmax                       | output_0            | torch.float32 |         | -94.3492889       | 81.7532043       | -4.9739466     | 674.6708374           | torch.Size([16, 512, 1])         |
| 1876    | torch.Tensor.max                                                                  | head.layers.21.attn.softmax                       | output_1            | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 1])         |
| 1877    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.21.attn.softmax.sub                   | input_0             | torch.float32 |         | -94.3492889       | 81.7532043       | -4.9739466     | 674.5888062           | torch.Size([16, 512, 256])       |
| 1877    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.21.attn.softmax.sub                   | input_1             | torch.float32 |         | -94.3492889       | 81.7532043       | -4.9739466     | 674.6708374           | torch.Size([16, 512, 1])         |
| 1877    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.21.attn.softmax.sub                   | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1878    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.21.attn.softmax.exp                   | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1878    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.21.attn.softmax.exp                   | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1879    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.21.attn.softmax.sum                   | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1879    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.21.attn.softmax.sum                   | output              | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 1880    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.21.attn.softmax.reciprocal            | input               | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 1880    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.21.attn.softmax.reciprocal            | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 1881    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.21.attn.softmax.mul                   | input_0             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1881    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.21.attn.softmax.mul                   | input_1             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 1881    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.21.attn.softmax.mul                   | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1882    | torch.nn.modules.dropout.Dropout                                                  | head.layers.21.attn.attention_drop                | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1882    | torch.nn.modules.dropout.Dropout                                                  | head.layers.21.attn.attention_drop                | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1883    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.21.attn.attn_matmul                   | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1883    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.21.attn.attn_matmul                   | input_1             | torch.float32 |         | -0.0568344        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([16, 256, 64])        |
| 1883    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.21.attn.attn_matmul                   | output              | torch.float32 |         | -0.0568343        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([16, 512, 64])        |
| 1884    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | input_0             | torch.float32 |         | -0.0568343        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([16, 512, 64])        |
| 1884    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | output              | torch.float32 |         | -0.0568343        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([512, 16, 64])        |
| 1885    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | input_0             | torch.float32 |         | -0.0568343        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([512, 16, 64])        |
| 1885    | torch.Tensor.reshape                                                              | head.layers.21.attn                               | output              | torch.float32 |         | -0.0568343        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([512, 2, 512])        |
| 1886    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.out_proj                      | input               | torch.float32 |         | -0.0568343        | 0.0711433        | 0.0019992      | 0.0005079             | torch.Size([512, 2, 512])        |
| 1886    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.out_proj                      | weight              | torch.float32 |         | -0.1928206        | 0.1779369        | -0.0001203     | 0.0022082             | torch.Size([512, 512])           |
| 1886    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.out_proj                      | bias                | torch.float32 |         | -0.2257318        | 0.2060668        | 0.0074249      | 0.0055845             | torch.Size([512])                |
| 1886    | torch.nn.modules.linear.Linear                                                    | head.layers.21.attn.out_proj                      | output              | torch.float32 |         | -0.3367843        | 0.3074320        | 0.0132861      | 0.0124201             | torch.Size([512, 2, 512])        |
| 1887    | torch.Tensor.view                                                                 | head.layers.21.attn                               | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 1887    | torch.Tensor.view                                                                 | head.layers.21.attn                               | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 1888    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.21.attn.attn_weights_mean             | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 1888    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.21.attn.attn_weights_mean             | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 512, 256])        |
| 1889    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | input_0             | torch.float32 |         | -0.3367843        | 0.3074320        | 0.0132861      | 0.0124201             | torch.Size([512, 2, 512])        |
| 1889    | torch.Tensor.transpose                                                            | head.layers.21.attn                               | output              | torch.float32 |         | -0.3367843        | 0.3074320        | 0.0132861      | 0.0124201             | torch.Size([2, 512, 512])        |
| 1890    | torch.nn.modules.dropout.Dropout                                                  | head.layers.21.dropout                            | input               | torch.float32 |         | -0.3367843        | 0.3074320        | 0.0132861      | 0.0124201             | torch.Size([2, 512, 512])        |
| 1890    | torch.nn.modules.dropout.Dropout                                                  | head.layers.21.dropout                            | output              | torch.float32 |         | -0.3367843        | 0.3074320        | 0.0132861      | 0.0124201             | torch.Size([2, 512, 512])        |
| 1891    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.21.add                                | input_0             | torch.float32 |         | -4.3537626        | 7.4040437        | 0.0287078      | 0.8121301             | torch.Size([2, 512, 512])        |
| 1891    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.21.add                                | input_1             | torch.float32 |         | -0.3367843        | 0.3074320        | 0.0132861      | 0.0124201             | torch.Size([2, 512, 512])        |
| 1891    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.21.add                                | output              | torch.float32 |         | -4.3125963        | 7.2329187        | 0.0419939      | 0.7769124             | torch.Size([2, 512, 512])        |
| 1892    | torch.nn.modules.linear.Linear                                                    | head.fc_after(6)                                  | input               | torch.float32 |         | -4.3125963        | 7.2329187        | 0.0419939      | 0.7769124             | torch.Size([2, 512, 512])        |
| 1892    | torch.nn.modules.linear.Linear                                                    | head.fc_after(6)                                  | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 1892    | torch.nn.modules.linear.Linear                                                    | head.fc_after(6)                                  | output              | torch.float32 |         | -6.7732830        | 5.7424440        | 0.0352198      | 0.8505983             | torch.Size([2, 512, 256])        |
| 1893    | torch.nn.modules.linear.Linear                                                    | head.fc_before(7)                                 | input               | torch.float32 |         | -6.7732830        | 5.7424440        | 0.0352198      | 0.8505983             | torch.Size([2, 512, 256])        |
| 1893    | torch.nn.modules.linear.Linear                                                    | head.fc_before(7)                                 | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 1893    | torch.nn.modules.linear.Linear                                                    | head.fc_before(7)                                 | output              | torch.float32 |         | -3.0022862        | 3.2203870        | 0.0003706      | 0.0537094             | torch.Size([2, 512, 512])        |
| 1894    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.22.query_cat                          | input_0             | torch.float32 |         | -6.7732830        | 5.7424440        | 0.0352198      | 0.8505983             | torch.Size([2, 512, 256])        |
| 1894    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.22.query_cat                          | input_1             | torch.float32 |         | -1.7142470        | 7.4040437        | 0.0533052      | 0.8556479             | torch.Size([2, 512, 256])        |
| 1894    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.22.query_cat                          | output              | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([2, 512, 512])        |
| 1895    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.22.key_cat                            | input_0             | torch.float32 |         | -6.7732830        | 5.7424440        | 0.0352198      | 0.8505983             | torch.Size([2, 512, 256])        |
| 1895    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.22.key_cat                            | input_1             | torch.float32 |         | -1.7142470        | 7.4040437        | 0.0533052      | 0.8556479             | torch.Size([2, 512, 256])        |
| 1895    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.22.key_cat                            | output              | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([2, 512, 512])        |
| 1896    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | input_0             | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([2, 512, 512])        |
| 1896    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | output              | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([512, 2, 512])        |
| 1897    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | input_0             | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([2, 512, 512])        |
| 1897    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | output              | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([512, 2, 512])        |
| 1898    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | input_0             | torch.float32 |         | -3.0022862        | 3.2203870        | 0.0003706      | 0.0537094             | torch.Size([2, 512, 512])        |
| 1898    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | output              | torch.float32 |         | -3.0022862        | 3.2203870        | 0.0003706      | 0.0537094             | torch.Size([512, 2, 512])        |
| 1899    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | input_0             | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([512, 2, 512])        |
| 1899    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | output              | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([512, 2, 512])        |
| 1900    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | input_0             | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([512, 2, 512])        |
| 1900    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | output              | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([512, 2, 512])        |
| 1901    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | input_0             | torch.float32 |         | -3.0022862        | 3.2203870        | 0.0003706      | 0.0537094             | torch.Size([512, 2, 512])        |
| 1901    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | output              | torch.float32 |         | -3.0022862        | 3.2203870        | 0.0003706      | 0.0537094             | torch.Size([512, 2, 512])        |
| 1902    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.q_proj                        | input               | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([512, 2, 512])        |
| 1902    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.q_proj                        | weight              | torch.float32 |         | -0.2868485        | 0.3352289        | -0.0001518     | 0.0026820             | torch.Size([512, 512])           |
| 1902    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.q_proj                        | bias                | torch.float32 |         | -0.0801667        | 0.0727894        | 0.0005583      | 0.0005112             | torch.Size([512])                |
| 1902    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.q_proj                        | output              | torch.float32 |         | -10.7484541       | 9.5823679        | -0.0208879     | 4.8322954             | torch.Size([512, 2, 512])        |
| 1903    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.k_proj                        | input               | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([512, 2, 512])        |
| 1903    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.k_proj                        | weight              | torch.float32 |         | -0.5697392        | 0.5493896        | -0.0000795     | 0.0032088             | torch.Size([512, 512])           |
| 1903    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.k_proj                        | bias                | torch.float32 |         | -0.0280499        | 0.0381052        | -0.0003095     | 0.0000538             | torch.Size([512])                |
| 1903    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.k_proj                        | output              | torch.float32 |         | -14.6461535       | 13.8774071       | -0.0208756     | 6.3993802             | torch.Size([512, 2, 512])        |
| 1904    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.v_proj                        | input               | torch.float32 |         | -3.0022862        | 3.2203870        | 0.0003706      | 0.0537094             | torch.Size([512, 2, 512])        |
| 1904    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.v_proj                        | weight              | torch.float32 |         | -0.2083604        | 0.2150452        | -0.0000953     | 0.0016115             | torch.Size([512, 512])           |
| 1904    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.v_proj                        | bias                | torch.float32 |         | -0.3051279        | 0.2680113        | 0.0025552      | 0.0078078             | torch.Size([512])                |
| 1904    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.v_proj                        | output              | torch.float32 |         | -2.9027326        | 2.9005668        | 0.0230236      | 0.1447900             | torch.Size([512, 2, 512])        |
| 1905    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | input_0             | torch.float32 |         | -10.7484541       | 9.5823679        | -0.0208879     | 4.8322954             | torch.Size([512, 2, 512])        |
| 1905    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | output              | torch.float32 |         | -10.7484541       | 9.5823679        | -0.0208879     | 4.8322954             | torch.Size([512, 16, 64])        |
| 1906    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | input_0             | torch.float32 |         | -10.7484541       | 9.5823679        | -0.0208879     | 4.8322954             | torch.Size([512, 16, 64])        |
| 1906    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | output              | torch.float32 |         | -10.7484541       | 9.5823679        | -0.0208879     | 4.8322954             | torch.Size([16, 512, 64])        |
| 1907    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | input_0             | torch.float32 |         | -14.6461535       | 13.8774071       | -0.0208756     | 6.3993802             | torch.Size([512, 2, 512])        |
| 1907    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | output              | torch.float32 |         | -14.6461535       | 13.8774071       | -0.0208756     | 6.3993802             | torch.Size([512, 16, 64])        |
| 1908    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | input_0             | torch.float32 |         | -14.6461535       | 13.8774071       | -0.0208756     | 6.3993802             | torch.Size([512, 16, 64])        |
| 1908    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | output              | torch.float32 |         | -14.6461535       | 13.8774071       | -0.0208756     | 6.3993802             | torch.Size([16, 512, 64])        |
| 1909    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | input_0             | torch.float32 |         | -2.9027326        | 2.9005668        | 0.0230236      | 0.1447900             | torch.Size([512, 2, 512])        |
| 1909    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | output              | torch.float32 |         | -2.9027326        | 2.9005668        | 0.0230236      | 0.1447900             | torch.Size([512, 16, 64])        |
| 1910    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | input_0             | torch.float32 |         | -2.9027326        | 2.9005668        | 0.0230236      | 0.1447900             | torch.Size([512, 16, 64])        |
| 1910    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | output              | torch.float32 |         | -2.9027326        | 2.9005668        | 0.0230236      | 0.1447900             | torch.Size([16, 512, 64])        |
| 1911    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.22.attn.q_scale_mul                   | input_0             | torch.float32 |         | -10.7484541       | 9.5823679        | -0.0208879     | 4.8322954             | torch.Size([16, 512, 64])        |
| 1911    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.22.attn.q_scale_mul                   | output              | torch.float32 |         | -1.3435568        | 1.1977960        | -0.0026110     | 0.0755046             | torch.Size([16, 512, 64])        |
| 1912    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | input_0             | torch.float32 |         | -14.6461535       | 13.8774071       | -0.0208756     | 6.3993802             | torch.Size([16, 512, 64])        |
| 1912    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | output              | torch.float32 |         | -14.6461535       | 13.8774071       | -0.0208756     | 6.3993802             | torch.Size([16, 64, 512])        |
| 1913    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.22.attn.matmul                        | input_0             | torch.float32 |         | -1.3435568        | 1.1977960        | -0.0026110     | 0.0755046             | torch.Size([16, 512, 64])        |
| 1913    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.22.attn.matmul                        | input_1             | torch.float32 |         | -14.6461535       | 13.8774071       | -0.0208756     | 6.3993802             | torch.Size([16, 64, 512])        |
| 1913    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.22.attn.matmul                        | output              | torch.float32 |         | -85.4424057       | 110.3384552      | -0.7493385     | 320.4293518           | torch.Size([16, 512, 512])       |
| 1914    | torch.Tensor.max                                                                  | head.layers.22.attn.softmax                       | input               | torch.float32 |         | -85.4424057       | 110.3384552      | -0.7493385     | 320.4293518           | torch.Size([16, 512, 512])       |
| 1914    | torch.Tensor.max                                                                  | head.layers.22.attn.softmax                       | output_0            | torch.float32 |         | 4.8899727         | 110.3384552      | 33.4147034     | 432.0902405           | torch.Size([16, 512, 1])         |
| 1914    | torch.Tensor.max                                                                  | head.layers.22.attn.softmax                       | output_1            | torch.int64   |         | 0.0000000         | 511.0000000      | 306.4377441    | 16204.9023438         | torch.Size([16, 512, 1])         |
| 1915    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.22.attn.softmax.sub                   | input_0             | torch.float32 |         | -85.4424057       | 110.3384552      | -0.7493385     | 320.4293518           | torch.Size([16, 512, 512])       |
| 1915    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.22.attn.softmax.sub                   | input_1             | torch.float32 |         | 4.8899727         | 110.3384552      | 33.4147034     | 432.0902405           | torch.Size([16, 512, 1])         |
| 1915    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.22.attn.softmax.sub                   | output              | torch.float32 |         | -168.4913788      | 0.0000000        | -34.1640396    | 719.7138672           | torch.Size([16, 512, 512])       |
| 1916    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.22.attn.softmax.exp                   | input               | torch.float32 |         | -168.4913788      | 0.0000000        | -34.1640396    | 719.7138672           | torch.Size([16, 512, 512])       |
| 1916    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.22.attn.softmax.exp                   | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0218255      | 0.0171865             | torch.Size([16, 512, 512])       |
| 1917    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.22.attn.softmax.sum                   | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0218255      | 0.0171865             | torch.Size([16, 512, 512])       |
| 1917    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.22.attn.softmax.sum                   | output              | torch.float32 |         | 1.0000008         | 132.1687164      | 11.1746559     | 797.1243286           | torch.Size([16, 512, 1])         |
| 1918    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.22.attn.softmax.reciprocal            | input               | torch.float32 |         | 1.0000008         | 132.1687164      | 11.1746559     | 797.1243286           | torch.Size([16, 512, 1])         |
| 1918    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.22.attn.softmax.reciprocal            | output              | torch.float32 |         | 0.0075661         | 0.9999992        | 0.3515930      | 0.0621426             | torch.Size([16, 512, 1])         |
| 1919    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.22.attn.softmax.mul                   | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0218255      | 0.0171865             | torch.Size([16, 512, 512])       |
| 1919    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.22.attn.softmax.mul                   | input_1             | torch.float32 |         | 0.0075661         | 0.9999992        | 0.3515930      | 0.0621426             | torch.Size([16, 512, 1])         |
| 1919    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.22.attn.softmax.mul                   | output              | torch.float32 |         | 0.0000000         | 0.9999992        | 0.0019531      | 0.0004656             | torch.Size([16, 512, 512])       |
| 1920    | torch.nn.modules.dropout.Dropout                                                  | head.layers.22.attn.attention_drop                | input               | torch.float32 |         | 0.0000000         | 0.9999992        | 0.0019531      | 0.0004656             | torch.Size([16, 512, 512])       |
| 1920    | torch.nn.modules.dropout.Dropout                                                  | head.layers.22.attn.attention_drop                | output              | torch.float32 |         | 0.0000000         | 0.9999992        | 0.0019531      | 0.0004656             | torch.Size([16, 512, 512])       |
| 1921    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.22.attn.attn_matmul                   | input_0             | torch.float32 |         | 0.0000000         | 0.9999992        | 0.0019531      | 0.0004656             | torch.Size([16, 512, 512])       |
| 1921    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.22.attn.attn_matmul                   | input_1             | torch.float32 |         | -2.9027326        | 2.9005668        | 0.0230236      | 0.1447900             | torch.Size([16, 512, 64])        |
| 1921    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.22.attn.attn_matmul                   | output              | torch.float32 |         | -2.5733821        | 2.3953395        | 0.0125123      | 0.1161668             | torch.Size([16, 512, 64])        |
| 1922    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | input_0             | torch.float32 |         | -2.5733821        | 2.3953395        | 0.0125123      | 0.1161668             | torch.Size([16, 512, 64])        |
| 1922    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | output              | torch.float32 |         | -2.5733821        | 2.3953395        | 0.0125123      | 0.1161668             | torch.Size([512, 16, 64])        |
| 1923    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | input_0             | torch.float32 |         | -2.5733821        | 2.3953395        | 0.0125123      | 0.1161668             | torch.Size([512, 16, 64])        |
| 1923    | torch.Tensor.reshape                                                              | head.layers.22.attn                               | output              | torch.float32 |         | -2.5733821        | 2.3953395        | 0.0125123      | 0.1161668             | torch.Size([512, 2, 512])        |
| 1924    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.out_proj                      | input               | torch.float32 |         | -2.5733821        | 2.3953395        | 0.0125123      | 0.1161668             | torch.Size([512, 2, 512])        |
| 1924    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.out_proj                      | weight              | torch.float32 |         | -0.2679534        | 0.2460409        | 0.0001218      | 0.0026792             | torch.Size([512, 512])           |
| 1924    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.out_proj                      | bias                | torch.float32 |         | -0.3912482        | 0.3744041        | -0.0041605     | 0.0237935             | torch.Size([512])                |
| 1924    | torch.nn.modules.linear.Linear                                                    | head.layers.22.attn.out_proj                      | output              | torch.float32 |         | -3.3205473        | 3.2061818        | -0.0138700     | 0.6116813             | torch.Size([512, 2, 512])        |
| 1925    | torch.Tensor.view                                                                 | head.layers.22.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.9999992        | 0.0019531      | 0.0004656             | torch.Size([16, 512, 512])       |
| 1925    | torch.Tensor.view                                                                 | head.layers.22.attn                               | output              | torch.float32 |         | 0.0000000         | 0.9999992        | 0.0019531      | 0.0004656             | torch.Size([2, 8, 512, 512])     |
| 1926    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.22.attn.attn_weights_mean             | input               | torch.float32 |         | 0.0000000         | 0.9999992        | 0.0019531      | 0.0004656             | torch.Size([2, 8, 512, 512])     |
| 1926    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.22.attn.attn_weights_mean             | output              | torch.float32 |         | 0.0000000         | 0.2213060        | 0.0019531      | 0.0000700             | torch.Size([2, 512, 512])        |
| 1927    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | input_0             | torch.float32 |         | -3.3205473        | 3.2061818        | -0.0138700     | 0.6116813             | torch.Size([512, 2, 512])        |
| 1927    | torch.Tensor.transpose                                                            | head.layers.22.attn                               | output              | torch.float32 |         | -3.3205473        | 3.2061818        | -0.0138700     | 0.6116813             | torch.Size([2, 512, 512])        |
| 1928    | torch.nn.modules.dropout.Dropout                                                  | head.layers.22.dropout                            | input               | torch.float32 |         | -3.3205473        | 3.2061818        | -0.0138700     | 0.6116813             | torch.Size([2, 512, 512])        |
| 1928    | torch.nn.modules.dropout.Dropout                                                  | head.layers.22.dropout                            | output              | torch.float32 |         | -3.3205473        | 3.2061818        | -0.0138700     | 0.6116813             | torch.Size([2, 512, 512])        |
| 1929    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.22.add                                | input_0             | torch.float32 |         | -6.7732830        | 7.4040437        | 0.0442625      | 0.8532032             | torch.Size([2, 512, 512])        |
| 1929    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.22.add                                | input_1             | torch.float32 |         | -3.3205473        | 3.2061818        | -0.0138700     | 0.6116813             | torch.Size([2, 512, 512])        |
| 1929    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.22.add                                | output              | torch.float32 |         | -8.4558220        | 8.1789837        | 0.0303925      | 1.4702841             | torch.Size([2, 512, 512])        |
| 1930    | torch.nn.modules.linear.Linear                                                    | head.fc_after(7)                                  | input               | torch.float32 |         | -8.4558220        | 8.1789837        | 0.0303925      | 1.4702841             | torch.Size([2, 512, 512])        |
| 1930    | torch.nn.modules.linear.Linear                                                    | head.fc_after(7)                                  | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 1930    | torch.nn.modules.linear.Linear                                                    | head.fc_after(7)                                  | output              | torch.float32 |         | -53.2941933       | 42.3362846       | 0.0480828      | 25.0999603            | torch.Size([2, 512, 256])        |
| 1931    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.23.input_mean.mean                    | input_0             | torch.float32 |         | -53.2941933       | 42.3362846       | 0.0480828      | 25.0999603            | torch.Size([2, 512, 256])        |
| 1931    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.23.input_mean.mean                    | output              | torch.float32 |         | -0.0757820        | 0.1899408        | 0.0480828      | 0.0022849             | torch.Size([2, 512, 1])          |
| 1932    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.23.sub                                | input_0             | torch.float32 |         | -53.2941933       | 42.3362846       | 0.0480828      | 25.0999603            | torch.Size([2, 512, 256])        |
| 1932    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.23.sub                                | input_1             | torch.float32 |         | -0.0757820        | 0.1899408        | 0.0480828      | 0.0022849             | torch.Size([2, 512, 1])          |
| 1932    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.23.sub                                | output              | torch.float32 |         | -53.3180580       | 42.2807655       | 0.0000000      | 25.0976791            | torch.Size([2, 512, 256])        |
| 1933    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.23.mul                                | input_0             | torch.float32 |         | -53.3180580       | 42.2807655       | 0.0000000      | 25.0976791            | torch.Size([2, 512, 256])        |
| 1933    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.23.mul                                | input_1             | torch.float32 |         | -53.3180580       | 42.2807655       | 0.0000000      | 25.0976791            | torch.Size([2, 512, 256])        |
| 1933    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.23.mul                                | output              | torch.float32 |         | 0.0000000         | 2842.8154297     | 25.0975838     | 22819.0332031         | torch.Size([2, 512, 256])        |
| 1934    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.23.var_mean.mean                      | input_0             | torch.float32 |         | 0.0000000         | 2842.8154297     | 25.0975838     | 22819.0332031         | torch.Size([2, 512, 256])        |
| 1934    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.23.var_mean.mean                      | output              | torch.float32 |         | 8.1463766         | 47.9012337       | 25.0975838     | 110.8731232           | torch.Size([2, 512, 1])          |
| 1935    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.23.rsqrt                              | input               | torch.float32 |         | 8.1463766         | 47.9012337       | 25.0975838     | 110.8731232           | torch.Size([2, 512, 1])          |
| 1935    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.23.rsqrt                              | output              | torch.float32 |         | 0.1444863         | 0.3503624        | 0.2204659      | 0.0042715             | torch.Size([2, 512, 1])          |
| 1936    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.23.out_mul                            | input_0             | torch.float32 |         | -53.3180580       | 42.2807655       | 0.0000000      | 25.0976791            | torch.Size([2, 512, 256])        |
| 1936    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.23.out_mul                            | input_1             | torch.float32 |         | 0.1444863         | 0.3503624        | 0.2204659      | 0.0042715             | torch.Size([2, 512, 1])          |
| 1936    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.23.out_mul                            | output              | torch.float32 |         | -8.3495903        | 6.4533510        | 0.0000000      | 1.0000032             | torch.Size([2, 512, 256])        |
| 1937    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.23.weight_quant                       | input               | torch.float32 |         | 0.7844438         | 1.0446960        | 0.8969574      | 0.0022063             | torch.Size([256])                |
| 1937    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.23.weight_quant                       | output              | torch.float32 |         | 0.7844438         | 1.0446960        | 0.8969574      | 0.0022063             | torch.Size([256])                |
| 1938    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.23.weight_mul                         | input_0             | torch.float32 |         | -8.3495903        | 6.4533510        | 0.0000000      | 1.0000032             | torch.Size([2, 512, 256])        |
| 1938    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.23.weight_mul                         | input_1             | torch.float32 |         | 0.7844438         | 1.0446960        | 0.8969574      | 0.0022063             | torch.Size([256])                |
| 1938    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.23.weight_mul                         | output              | torch.float32 |         | -6.6993146        | 5.6449304        | 0.0024599      | 0.7481129             | torch.Size([2, 512, 256])        |
| 1939    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.23.bias_quant                         | input               | torch.float32 |         | -0.1350660        | 0.1619885        | 0.0027300      | 0.0011589             | torch.Size([256])                |
| 1939    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.23.bias_quant                         | output              | torch.float32 |         | -0.1350660        | 0.1619885        | 0.0027300      | 0.0011589             | torch.Size([256])                |
| 1940    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.23.bias_add                           | input_0             | torch.float32 |         | -6.6993146        | 5.6449304        | 0.0024599      | 0.7481129             | torch.Size([2, 512, 256])        |
| 1940    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.23.bias_add                           | input_1             | torch.float32 |         | -0.1350660        | 0.1619885        | 0.0027300      | 0.0011589             | torch.Size([256])                |
| 1940    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.23.bias_add                           | output              | torch.float32 |         | -6.5373259        | 5.5762610        | 0.0051899      | 0.7193509             | torch.Size([2, 512, 256])        |
| 1941    | torch.nn.modules.linear.Linear                                                    | head.layers.24.kps_generator.offset               | input               | torch.float32 |         | -6.5373259        | 5.5762610        | 0.0051899      | 0.7193509             | torch.Size([2, 512, 256])        |
| 1941    | torch.nn.modules.linear.Linear                                                    | head.layers.24.kps_generator.offset               | weight              | torch.float32 |         | -0.4079330        | 0.3764863        | -0.0009719     | 0.0062766             | torch.Size([24, 256])            |
| 1941    | torch.nn.modules.linear.Linear                                                    | head.layers.24.kps_generator.offset               | bias                | torch.float32 |         | -0.1728180        | 0.0862914        | -0.0105869     | 0.0040706             | torch.Size([24])                 |
| 1941    | torch.nn.modules.linear.Linear                                                    | head.layers.24.kps_generator.offset               | output              | torch.float32 |         | -14.0329199       | 8.2216978        | -0.4509866     | 10.5095615            | torch.Size([2, 512, 24])         |
| 1942    | torch.Tensor.view                                                                 | head.layers.24.kps_generator                      | input_0             | torch.float32 |         | -14.0329199       | 8.2216978        | -0.4509866     | 10.5095615            | torch.Size([2, 512, 24])         |
| 1942    | torch.Tensor.view                                                                 | head.layers.24.kps_generator                      | output              | torch.float32 |         | -14.0329199       | 8.2216978        | -0.4509866     | 10.5095615            | torch.Size([2, 512, 8, 3])       |
| 1943    | torch.Tensor.__getitem__                                                          | head.layers.24.kps_generator                      | input_0             | torch.float32 |         | -53.5869904       | 53.6926079       | 0.2068973      | 79.3955536            | torch.Size([2, 512, 11])         |
| 1943    | torch.Tensor.__getitem__                                                          | head.layers.24.kps_generator                      | output              | torch.float32 |         | -53.5869904       | 53.6926079       | 0.7796978      | 289.6354675           | torch.Size([2, 512, 1, 3])       |
| 1944    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.kps_generator.keypoints_add        | input_0             | torch.float32 |         | -14.0329199       | 8.2216978        | -0.4509866     | 10.5095615            | torch.Size([2, 512, 8, 3])       |
| 1944    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.kps_generator.keypoints_add        | input_1             | torch.float32 |         | -53.5869904       | 53.6926079       | 0.7796978      | 289.6354675           | torch.Size([2, 512, 1, 3])       |
| 1944    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.kps_generator.keypoints_add        | output              | torch.float32 |         | -60.1601257       | 60.0180740       | 0.3287112      | 299.4942932           | torch.Size([2, 512, 8, 3])       |
| 1945    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.weight_add                         | input_0             | torch.float32 |         | -6.5373259        | 5.5762610        | 0.0051899      | 0.7193509             | torch.Size([2, 512, 256])        |
| 1945    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.weight_add                         | input_1             | torch.float32 |         | -1.7142470        | 7.4040437        | 0.0533052      | 0.8556479             | torch.Size([2, 512, 256])        |
| 1945    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.weight_add                         | output              | torch.float32 |         | -6.9255452        | 7.7636490        | 0.0584951      | 1.5101061             | torch.Size([2, 512, 256])        |
| 1946    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 1946    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 1947    | torch.Tensor.reshape                                                              | head.layers.24                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 1947    | torch.Tensor.reshape                                                              | head.layers.24                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 1948    | torch.nn.modules.linear.Linear                                                    | head.layers.24.camera_encoder.0                   | input               | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 1948    | torch.nn.modules.linear.Linear                                                    | head.layers.24.camera_encoder.0                   | weight              | torch.float32 |         | -0.7857405        | 0.6352730        | 0.0006263      | 0.0174991             | torch.Size([256, 12])            |
| 1948    | torch.nn.modules.linear.Linear                                                    | head.layers.24.camera_encoder.0                   | bias                | torch.float32 |         | -0.3248905        | 0.3380931        | 0.0039869      | 0.0290271             | torch.Size([256])                |
| 1948    | torch.nn.modules.linear.Linear                                                    | head.layers.24.camera_encoder.0                   | output              | torch.float32 |         | -1.1150846        | 1.3106909        | -0.0378232     | 0.1963174             | torch.Size([2, 6, 256])          |
| 1949    | torch.nn.modules.activation.ReLU                                                  | head.layers.24.camera_encoder.1                   | input               | torch.float32 |         | 0.0000000         | 1.3106909        | 0.1688972      | 0.0610160             | torch.Size([2, 6, 256])          |
| 1949    | torch.nn.modules.activation.ReLU                                                  | head.layers.24.camera_encoder.1                   | output              | torch.float32 |         | 0.0000000         | 1.3106909        | 0.1688972      | 0.0610160             | torch.Size([2, 6, 256])          |
| 1950    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.2.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 1.3106909        | 0.1688972      | 0.0610160             | torch.Size([2, 6, 256])          |
| 1950    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.2.input_mean.mean   | output              | torch.float32 |         | 0.1171157         | 0.1850860        | 0.1688972      | 0.0005943             | torch.Size([2, 6, 1])            |
| 1951    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.2.sub               | input_0             | torch.float32 |         | 0.0000000         | 1.3106909        | 0.1688972      | 0.0610160             | torch.Size([2, 6, 256])          |
| 1951    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.2.sub               | input_1             | torch.float32 |         | 0.1171157         | 0.1850860        | 0.1688972      | 0.0005943             | torch.Size([2, 6, 1])            |
| 1951    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.2.sub               | output              | torch.float32 |         | -0.1850860        | 1.1300470        | 0.0000000      | 0.0604710             | torch.Size([2, 6, 256])          |
| 1952    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.mul               | input_0             | torch.float32 |         | -0.1850860        | 1.1300470        | 0.0000000      | 0.0604710             | torch.Size([2, 6, 256])          |
| 1952    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.mul               | input_1             | torch.float32 |         | -0.1850860        | 1.1300470        | 0.0000000      | 0.0604710             | torch.Size([2, 6, 256])          |
| 1952    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.mul               | output              | torch.float32 |         | 0.0000000         | 1.2770061        | 0.0604513      | 0.0142281             | torch.Size([2, 6, 256])          |
| 1953    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.2.var_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 1.2770061        | 0.0604513      | 0.0142281             | torch.Size([2, 6, 256])          |
| 1953    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.2.var_mean.mean     | output              | torch.float32 |         | 0.0219955         | 0.0730022        | 0.0604513      | 0.0003261             | torch.Size([2, 6, 1])            |
| 1954    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.24.camera_encoder.2.rsqrt             | input               | torch.float32 |         | 0.0219955         | 0.0730022        | 0.0604513      | 0.0003261             | torch.Size([2, 6, 1])            |
| 1954    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.24.camera_encoder.2.rsqrt             | output              | torch.float32 |         | 3.7008562         | 6.7411599        | 4.3139830      | 1.2641422             | torch.Size([2, 6, 1])            |
| 1955    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.out_mul           | input_0             | torch.float32 |         | -0.1850860        | 1.1300470        | 0.0000000      | 0.0604710             | torch.Size([2, 6, 256])          |
| 1955    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.out_mul           | input_1             | torch.float32 |         | 3.7008562         | 6.7411599        | 4.3139830      | 1.2641422             | torch.Size([2, 6, 1])            |
| 1955    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.out_mul           | output              | torch.float32 |         | -0.7894958        | 4.3795042        | 0.0000000      | 1.0001278             | torch.Size([2, 6, 256])          |
| 1956    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.camera_encoder.2.weight_quant      | input               | torch.float32 |         | 0.7170500         | 1.1652156        | 0.9740722      | 0.0055252             | torch.Size([256])                |
| 1956    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.camera_encoder.2.weight_quant      | output              | torch.float32 |         | 0.7170500         | 1.1652156        | 0.9740722      | 0.0055252             | torch.Size([256])                |
| 1957    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.weight_mul        | input_0             | torch.float32 |         | -0.7894958        | 4.3795042        | 0.0000000      | 1.0001278             | torch.Size([2, 6, 256])          |
| 1957    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.weight_mul        | input_1             | torch.float32 |         | 0.7170500         | 1.1652156        | 0.9740722      | 0.0055252             | torch.Size([256])                |
| 1957    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.2.weight_mul        | output              | torch.float32 |         | -0.8622141        | 4.3564095        | 0.0092789      | 0.9708663             | torch.Size([2, 6, 256])          |
| 1958    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.camera_encoder.2.bias_quant        | input               | torch.float32 |         | -0.0844964        | 0.2250945        | 0.0129729      | 0.0024166             | torch.Size([256])                |
| 1958    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.camera_encoder.2.bias_quant        | output              | torch.float32 |         | -0.0844964        | 0.2250945        | 0.0129729      | 0.0024166             | torch.Size([256])                |
| 1959    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.2.bias_add          | input_0             | torch.float32 |         | -0.8622141        | 4.3564095        | 0.0092789      | 0.9708663             | torch.Size([2, 6, 256])          |
| 1959    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.2.bias_add          | input_1             | torch.float32 |         | -0.0844964        | 0.2250945        | 0.0129729      | 0.0024166             | torch.Size([256])                |
| 1959    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.2.bias_add          | output              | torch.float32 |         | -0.9233735        | 4.3962269        | 0.0222518      | 0.9557417             | torch.Size([2, 6, 256])          |
| 1960    | torch.nn.modules.linear.Linear                                                    | head.layers.24.camera_encoder.3                   | input               | torch.float32 |         | -0.9233735        | 4.3962269        | 0.0222518      | 0.9557417             | torch.Size([2, 6, 256])          |
| 1960    | torch.nn.modules.linear.Linear                                                    | head.layers.24.camera_encoder.3                   | weight              | torch.float32 |         | -0.4547428        | 0.4697872        | 0.0003959      | 0.0051907             | torch.Size([256, 256])           |
| 1960    | torch.nn.modules.linear.Linear                                                    | head.layers.24.camera_encoder.3                   | bias                | torch.float32 |         | -0.0825015        | 0.3699438        | -0.0037957     | 0.0022571             | torch.Size([256])                |
| 1960    | torch.nn.modules.linear.Linear                                                    | head.layers.24.camera_encoder.3                   | output              | torch.float32 |         | -13.0310612       | 59.4793243       | -0.5715610     | 26.2541924            | torch.Size([2, 6, 256])          |
| 1961    | torch.nn.modules.activation.ReLU                                                  | head.layers.24.camera_encoder.4                   | input               | torch.float32 |         | 0.0000000         | 59.4793243       | 1.0778677      | 21.1184273            | torch.Size([2, 6, 256])          |
| 1961    | torch.nn.modules.activation.ReLU                                                  | head.layers.24.camera_encoder.4                   | output              | torch.float32 |         | 0.0000000         | 59.4793243       | 1.0778677      | 21.1184273            | torch.Size([2, 6, 256])          |
| 1962    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.5.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 59.4793243       | 1.0778677      | 21.1184273            | torch.Size([2, 6, 256])          |
| 1962    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.5.input_mean.mean   | output              | torch.float32 |         | 1.0592506         | 1.1075730        | 1.0778677      | 0.0002391             | torch.Size([2, 6, 1])            |
| 1963    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.5.sub               | input_0             | torch.float32 |         | 0.0000000         | 59.4793243       | 1.0778677      | 21.1184273            | torch.Size([2, 6, 256])          |
| 1963    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.5.sub               | input_1             | torch.float32 |         | 1.0592506         | 1.1075730        | 1.0778677      | 0.0002391             | torch.Size([2, 6, 1])            |
| 1963    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.24.camera_encoder.5.sub               | output              | torch.float32 |         | -1.1075730        | 58.3994293       | -0.0000001     | 21.1182079            | torch.Size([2, 6, 256])          |
| 1964    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.mul               | input_0             | torch.float32 |         | -1.1075730        | 58.3994293       | -0.0000001     | 21.1182079            | torch.Size([2, 6, 256])          |
| 1964    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.mul               | input_1             | torch.float32 |         | -1.1075730        | 58.3994293       | -0.0000001     | 21.1182079            | torch.Size([2, 6, 256])          |
| 1964    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.mul               | output              | torch.float32 |         | 0.0000069         | 3410.4934082     | 21.1113358     | 41001.4843750         | torch.Size([2, 6, 256])          |
| 1965    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.5.var_mean.mean     | input_0             | torch.float32 |         | 0.0000069         | 3410.4934082     | 21.1113358     | 41001.4843750         | torch.Size([2, 6, 256])          |
| 1965    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.24.camera_encoder.5.var_mean.mean     | output              | torch.float32 |         | 19.7622147        | 23.1240063       | 21.1113358     | 1.0787516             | torch.Size([2, 6, 1])            |
| 1966    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.24.camera_encoder.5.rsqrt             | input               | torch.float32 |         | 19.7622147        | 23.1240063       | 21.1113358     | 1.0787516             | torch.Size([2, 6, 1])            |
| 1966    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.24.camera_encoder.5.rsqrt             | output              | torch.float32 |         | 0.2079545         | 0.2249480        | 0.2178179      | 0.0000273             | torch.Size([2, 6, 1])            |
| 1967    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.out_mul           | input_0             | torch.float32 |         | -1.1075730        | 58.3994293       | -0.0000001     | 21.1182079            | torch.Size([2, 6, 256])          |
| 1967    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.out_mul           | input_1             | torch.float32 |         | 0.2079545         | 0.2249480        | 0.2178179      | 0.0000273             | torch.Size([2, 6, 1])            |
| 1967    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.out_mul           | output              | torch.float32 |         | -0.2491463        | 12.3467464       | 0.0000000      | 1.0003251             | torch.Size([2, 6, 256])          |
| 1968    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.camera_encoder.5.weight_quant      | input               | torch.float32 |         | 0.4739479         | 1.5194587        | 0.8861445      | 0.0227169             | torch.Size([256])                |
| 1968    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.camera_encoder.5.weight_quant      | output              | torch.float32 |         | 0.4739479         | 1.5194587        | 0.8861445      | 0.0227169             | torch.Size([256])                |
| 1969    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.weight_mul        | input_0             | torch.float32 |         | -0.2491463        | 12.3467464       | 0.0000000      | 1.0003251             | torch.Size([2, 6, 256])          |
| 1969    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.weight_mul        | input_1             | torch.float32 |         | 0.4739479         | 1.5194587        | 0.8861445      | 0.0227169             | torch.Size([256])                |
| 1969    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.camera_encoder.5.weight_mul        | output              | torch.float32 |         | -0.3785675        | 7.7113371        | -0.0193481     | 0.5535173             | torch.Size([2, 6, 256])          |
| 1970    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.camera_encoder.5.bias_quant        | input               | torch.float32 |         | -0.5851686        | 0.4827383        | 0.0429210      | 0.0232055             | torch.Size([256])                |
| 1970    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.camera_encoder.5.bias_quant        | output              | torch.float32 |         | -0.5851686        | 0.4827383        | 0.0429210      | 0.0232055             | torch.Size([256])                |
| 1971    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.5.bias_add          | input_0             | torch.float32 |         | -0.3785675        | 7.7113371        | -0.0193481     | 0.5535173             | torch.Size([2, 6, 256])          |
| 1971    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.5.bias_add          | input_1             | torch.float32 |         | -0.5851686        | 0.4827383        | 0.0429210      | 0.0232055             | torch.Size([256])                |
| 1971    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.camera_encoder.5.bias_add          | output              | torch.float32 |         | -0.9637361        | 7.3572764        | 0.0235729      | 0.5359664             | torch.Size([2, 6, 256])          |
| 1972    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | input_0             | torch.float32 |         | -6.9255452        | 7.7636490        | 0.0584951      | 1.5101061             | torch.Size([2, 512, 256])        |
| 1972    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | output              | torch.float32 |         | -6.9255452        | 7.7636490        | 0.0584951      | 1.5101061             | torch.Size([2, 512, 1, 256])     |
| 1973    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | input_0             | torch.float32 |         | -0.9637361        | 7.3572764        | 0.0235729      | 0.5359664             | torch.Size([2, 6, 256])          |
| 1973    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | output              | torch.float32 |         | -0.9637361        | 7.3572764        | 0.0235729      | 0.5359664             | torch.Size([2, 1, 6, 256])       |
| 1974    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.cam_add                            | input_0             | torch.float32 |         | -6.9255452        | 7.7636490        | 0.0584951      | 1.5101061             | torch.Size([2, 512, 1, 256])     |
| 1974    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.cam_add                            | input_1             | torch.float32 |         | -0.9637361        | 7.3572764        | 0.0235729      | 0.5359664             | torch.Size([2, 1, 6, 256])       |
| 1974    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.24.cam_add                            | output              | torch.float32 |         | -6.6400428        | 9.2713232        | 0.0820680      | 1.3970101             | torch.Size([2, 512, 6, 256])     |
| 1975    | torch.nn.modules.linear.Linear                                                    | head.layers.24.weights_fc                         | input               | torch.float32 |         | -6.6400428        | 9.2713232        | 0.0820680      | 1.3970101             | torch.Size([2, 512, 6, 256])     |
| 1975    | torch.nn.modules.linear.Linear                                                    | head.layers.24.weights_fc                         | weight              | torch.float32 |         | -0.3503168        | 0.2480071        | 0.0005745      | 0.0031640             | torch.Size([64, 256])            |
| 1975    | torch.nn.modules.linear.Linear                                                    | head.layers.24.weights_fc                         | bias                | torch.float32 |         | -0.1120743        | 0.0735845        | -0.0091236     | 0.0018223             | torch.Size([64])                 |
| 1975    | torch.nn.modules.linear.Linear                                                    | head.layers.24.weights_fc                         | output              | torch.float32 |         | -9.8977156        | 6.9750295        | -0.3674141     | 5.4765377             | torch.Size([2, 512, 6, 64])      |
| 1976    | torch.Tensor.reshape                                                              | head.layers.24                                    | input_0             | torch.float32 |         | -9.8977156        | 6.9750295        | -0.3674141     | 5.4765377             | torch.Size([2, 512, 6, 64])      |
| 1976    | torch.Tensor.reshape                                                              | head.layers.24                                    | output              | torch.float32 |         | -9.8977156        | 6.9750295        | -0.3674141     | 5.4765377             | torch.Size([2, 512, 48, 8])      |
| 1977    | torch.Tensor.max                                                                  | head.layers.24.weight_softmax                     | input               | torch.float32 |         | -9.8977156        | 6.9750295        | -0.3674141     | 5.4765377             | torch.Size([2, 512, 48, 8])      |
| 1977    | torch.Tensor.max                                                                  | head.layers.24.weight_softmax                     | output_0            | torch.float32 |         | 1.2609210         | 6.9750295        | 3.0020635      | 0.9168800             | torch.Size([2, 512, 1, 8])       |
| 1977    | torch.Tensor.max                                                                  | head.layers.24.weight_softmax                     | output_1            | torch.int64   |         | 3.0000000         | 47.0000000       | 27.5490723     | 141.5068207           | torch.Size([2, 512, 1, 8])       |
| 1978    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.24.weight_softmax.sub                 | input_0             | torch.float32 |         | -9.8977156        | 6.9750295        | -0.3674141     | 5.4765377             | torch.Size([2, 512, 48, 8])      |
| 1978    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.24.weight_softmax.sub                 | input_1             | torch.float32 |         | 1.2609210         | 6.9750295        | 3.0020635      | 0.9168800             | torch.Size([2, 512, 1, 8])       |
| 1978    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.24.weight_softmax.sub                 | output              | torch.float32 |         | -12.4532261       | 0.0000000        | -3.3694777     | 5.5460510             | torch.Size([2, 512, 48, 8])      |
| 1979    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.24.weight_softmax.exp                 | input               | torch.float32 |         | -12.4532261       | 0.0000000        | -3.3694777     | 5.5460510             | torch.Size([2, 512, 48, 8])      |
| 1979    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.24.weight_softmax.exp                 | output              | torch.float32 |         | 0.0000039         | 1.0000000        | 0.1903250      | 0.0785640             | torch.Size([2, 512, 48, 8])      |
| 1980    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.24.weight_softmax.sum                 | input               | torch.float32 |         | 0.0000039         | 1.0000000        | 0.1903250      | 0.0785640             | torch.Size([2, 512, 48, 8])      |
| 1980    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.24.weight_softmax.sum                 | output              | torch.float32 |         | 1.7335213         | 21.2442284       | 9.1356010      | 10.2374697            | torch.Size([2, 512, 1, 8])       |
| 1981    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.24.weight_softmax.reciprocal          | input               | torch.float32 |         | 1.7335213         | 21.2442284       | 9.1356010      | 10.2374697            | torch.Size([2, 512, 1, 8])       |
| 1981    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.24.weight_softmax.reciprocal          | output              | torch.float32 |         | 0.0470716         | 0.5768605        | 0.1324788      | 0.0064784             | torch.Size([2, 512, 1, 8])       |
| 1982    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.weight_softmax.mul                 | input_0             | torch.float32 |         | 0.0000039         | 1.0000000        | 0.1903250      | 0.0785640             | torch.Size([2, 512, 48, 8])      |
| 1982    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.weight_softmax.mul                 | input_1             | torch.float32 |         | 0.0470716         | 0.5768605        | 0.1324788      | 0.0064784             | torch.Size([2, 512, 1, 8])       |
| 1982    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.weight_softmax.mul                 | output              | torch.float32 |         | 0.0000007         | 0.5768605        | 0.0208333      | 0.0011102             | torch.Size([2, 512, 48, 8])      |
| 1983    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | input_0             | torch.float32 |         | -60.1601257       | 60.0180740       | 0.3287112      | 299.4942932           | torch.Size([2, 512, 8, 3])       |
| 1983    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | output              | torch.float32 |         | -60.1601257       | 51.7250404       | -1.4810002     | 325.4448242           | torch.Size([2, 512, 8, 1])       |
| 1984    | torch.ones_like                                                                   | head.layers.24                                    | input               | torch.float32 |         | -60.1601257       | 51.7250404       | -1.4810002     | 325.4448242           | torch.Size([2, 512, 8, 1])       |
| 1984    | torch.ones_like                                                                   | head.layers.24                                    | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1985    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.point_quant_stub                   | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1985    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.24.point_quant_stub                   | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1986    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.24.point_cat                          | input_0             | torch.float32 |         | -60.1601257       | 60.0180740       | 0.3287112      | 299.4942932           | torch.Size([2, 512, 8, 3])       |
| 1986    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.24.point_cat                          | input_1             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 1986    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.24.point_cat                          | output              | torch.float32 |         | -60.1601257       | 60.0180740       | 0.4965333      | 224.7029114           | torch.Size([2, 512, 8, 4])       |
| 1987    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 1987    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1988    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | input_0             | torch.float32 |         | -60.1601257       | 60.0180740       | 0.4965333      | 224.7029114           | torch.Size([2, 512, 8, 4])       |
| 1988    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | output              | torch.float32 |         | -60.1601257       | 60.0180740       | 0.4965333      | 224.7029114           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1989    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.point_matmul                       | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 1989    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.point_matmul                       | input_1             | torch.float32 |         | -60.1601257       | 60.0180740       | 0.4965333      | 224.7029114           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 1989    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.point_matmul                       | output              | torch.float32 |         | -95.2275620       | 88.4391479       | 0.0886731      | 101.0691376           | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1990    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.24.point_sum                          | input               | torch.float32 |         | -95.2275620       | 88.4391479       | 0.0886731      | 101.0691376           | torch.Size([2, 6, 512, 8, 4, 4]) |
| 1990    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.24.point_sum                          | output              | torch.float32 |         | -103.1625519      | 100.7714767      | 0.3546925      | 398.9231567           | torch.Size([2, 6, 512, 8, 4])    |
| 1991    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | input_0             | torch.float32 |         | -103.1625519      | 100.7714767      | 0.3546925      | 398.9231567           | torch.Size([2, 6, 512, 8, 4])    |
| 1991    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | output              | torch.float32 |         | -62.8774681       | 64.1424789       | -0.4595750     | 443.4212036           | torch.Size([2, 6, 512, 8, 1])    |
| 1992    | torch.clamp                                                                       | head.layers.24                                    | input               | torch.float32 |         | -62.8774681       | 64.1424789       | -0.4595750     | 443.4212036           | torch.Size([2, 6, 512, 8, 1])    |
| 1992    | torch.clamp                                                                       | head.layers.24                                    | output              | torch.float32 |         | 0.0000100         | 64.1424789       | 7.6238775      | 157.7985382           | torch.Size([2, 6, 512, 8, 1])    |
| 1993    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.24.reciprocal_op                      | input               | torch.float32 |         | 0.0000100         | 64.1424789       | 7.6238775      | 157.7985382           | torch.Size([2, 6, 512, 8, 1])    |
| 1993    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.24.reciprocal_op                      | output              | torch.float32 |         | 0.0155903         | 100000.0000000   | 53983.9921875  | 2484134912.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 1994    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | input_0             | torch.float32 |         | -103.1625519      | 100.7714767      | 0.3546925      | 398.9231567           | torch.Size([2, 6, 512, 8, 4])    |
| 1994    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | output              | torch.float32 |         | -103.1625519      | 100.7714767      | 0.4391725      | 575.5950928           | torch.Size([2, 6, 512, 8, 2])    |
| 1995    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.point_mul                          | input_0             | torch.float32 |         | -103.1625519      | 100.7714767      | 0.4391725      | 575.5950928           | torch.Size([2, 6, 512, 8, 2])    |
| 1995    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.point_mul                          | input_1             | torch.float32 |         | 0.0155903         | 100000.0000000   | 53983.9921875  | 2484134912.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 1995    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.point_mul                          | output              | torch.float32 |         | -10316255.0000000 | 9482159.0000000  | 176550.2812500 | 2847163809792.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 1996    | torch.Tensor.flatten                                                              | head.layers.24                                    | input               | torch.float32 |         | -10316255.0000000 | 9482159.0000000  | 176550.2812500 | 2847163809792.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 1996    | torch.Tensor.flatten                                                              | head.layers.24                                    | output              | torch.float32 |         | -10316255.0000000 | 9482159.0000000  | 176550.2812500 | 2847163809792.0000000 | torch.Size([12, 512, 8, 2])      |
| 1997    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.24                                    | input_0             | torch.float32 |         | -44.8620338       | 31.9191360       | 0.1436918      | 20.2713203            | torch.Size([12, 256, 16, 44])    |
| 1997    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.24                                    | input_1             | torch.float32 |         | -10316255.0000000 | 9482159.0000000  | 176550.2812500 | 2847163809792.0000000 | torch.Size([12, 512, 8, 2])      |
| 1997    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.24                                    | output              | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822770             | torch.Size([12, 256, 512, 8])    |
| 1998    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.24.feat_cat                           | input               | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822770             | torch.Size([12, 256, 512, 8])    |
| 1998    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.24.feat_cat                           | output              | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822770             | torch.Size([12, 256, 512, 8])    |
| 1999    | torch.Tensor.view                                                                 | head.layers.24                                    | input_0             | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822770             | torch.Size([12, 256, 512, 8])    |
| 1999    | torch.Tensor.view                                                                 | head.layers.24                                    | output              | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822770             | torch.Size([2, 6, 256, 512, 8])  |
| 2000    | torch.Tensor.permute                                                              | head.layers.24                                    | input_0             | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822770             | torch.Size([2, 6, 256, 512, 8])  |
| 2000    | torch.Tensor.permute                                                              | head.layers.24                                    | output              | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822770             | torch.Size([2, 512, 6, 8, 256])  |
| 2001    | torch.Tensor.contiguous                                                           | head.layers.24                                    | input               | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822770             | torch.Size([2, 512, 6, 8, 256])  |
| 2001    | torch.Tensor.contiguous                                                           | head.layers.24                                    | output              | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822775             | torch.Size([2, 512, 6, 8, 256])  |
| 2002    | torch.Tensor.view                                                                 | head.layers.24                                    | input_0             | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822775             | torch.Size([2, 512, 6, 8, 256])  |
| 2002    | torch.Tensor.view                                                                 | head.layers.24                                    | output              | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822775             | torch.Size([2, 512, 48, 256])    |
| 2003    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | input_0             | torch.float32 |         | 0.0000007         | 0.5768605        | 0.0208333      | 0.0011102             | torch.Size([2, 512, 48, 8])      |
| 2003    | torch.Tensor.__getitem__                                                          | head.layers.24                                    | output              | torch.float32 |         | 0.0000007         | 0.5768605        | 0.0208333      | 0.0011102             | torch.Size([2, 512, 48, 8, 1])   |
| 2004    | torch.Tensor.reshape                                                              | head.layers.24                                    | input_0             | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822775             | torch.Size([2, 512, 48, 256])    |
| 2004    | torch.Tensor.reshape                                                              | head.layers.24                                    | output              | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822775             | torch.Size([2, 512, 48, 8, 32])  |
| 2005    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.feat_mul                           | input_0             | torch.float32 |         | 0.0000007         | 0.5768605        | 0.0208333      | 0.0011102             | torch.Size([2, 512, 48, 8, 1])   |
| 2005    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.feat_mul                           | input_1             | torch.float32 |         | -39.2479286       | 28.3151855       | 0.0240557      | 2.8822775             | torch.Size([2, 512, 48, 8, 32])  |
| 2005    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.24.feat_mul                           | output              | torch.float32 |         | -2.2003171        | 2.1615214        | 0.0002572      | 0.0029098             | torch.Size([2, 512, 48, 8, 32])  |
| 2006    | torch.Tensor.view                                                                 | head.layers.24                                    | input_0             | torch.float32 |         | -2.2003171        | 2.1615214        | 0.0002572      | 0.0029098             | torch.Size([2, 512, 48, 8, 32])  |
| 2006    | torch.Tensor.view                                                                 | head.layers.24                                    | output              | torch.float32 |         | -2.2003171        | 2.1615214        | 0.0002572      | 0.0029098             | torch.Size([2, 512, 48, 256])    |
| 2007    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.24.feat_sum                           | input               | torch.float32 |         | -2.2003171        | 2.1615214        | 0.0002572      | 0.0029098             | torch.Size([2, 512, 48, 256])    |
| 2007    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.24.feat_sum                           | output              | torch.float32 |         | -6.2958660        | 5.8477793        | 0.0123464      | 0.3128531             | torch.Size([2, 512, 256])        |
| 2008    | torch.nn.modules.linear.Linear                                                    | head.layers.24.output_proj                        | input               | torch.float32 |         | -6.2958660        | 5.8477793        | 0.0123464      | 0.3128531             | torch.Size([2, 512, 256])        |
| 2008    | torch.nn.modules.linear.Linear                                                    | head.layers.24.output_proj                        | weight              | torch.float32 |         | -0.3212579        | 0.3928832        | -0.0001007     | 0.0072132             | torch.Size([256, 256])           |
| 2008    | torch.nn.modules.linear.Linear                                                    | head.layers.24.output_proj                        | bias                | torch.float32 |         | -0.0801640        | 0.1065602        | -0.0009339     | 0.0011949             | torch.Size([256])                |
| 2008    | torch.nn.modules.linear.Linear                                                    | head.layers.24.output_proj                        | output              | torch.float32 |         | -5.6556063        | 6.6268711        | 0.0069972      | 0.6528760             | torch.Size([2, 512, 256])        |
| 2009    | torch.nn.modules.dropout.Dropout                                                  | head.layers.24.proj_drop                          | input               | torch.float32 |         | -5.6556063        | 6.6268711        | 0.0069972      | 0.6528760             | torch.Size([2, 512, 256])        |
| 2009    | torch.nn.modules.dropout.Dropout                                                  | head.layers.24.proj_drop                          | output              | torch.float32 |         | -5.6556063        | 6.6268711        | 0.0069972      | 0.6528760             | torch.Size([2, 512, 256])        |
| 2010    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.24.residual_op                        | input_0             | torch.float32 |         | -5.6556063        | 6.6268711        | 0.0069972      | 0.6528760             | torch.Size([2, 512, 256])        |
| 2010    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.24.residual_op                        | input_1             | torch.float32 |         | -6.5373259        | 5.5762610        | 0.0051899      | 0.7193509             | torch.Size([2, 512, 256])        |
| 2010    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.24.residual_op                        | output              | torch.float32 |         | -6.5373259        | 6.6268711        | 0.0060936      | 0.6861130             | torch.Size([2, 512, 512])        |
| 2011    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.25.pre_norm.input_mean.mean           | input_0             | torch.float32 |         | -6.5373259        | 6.6268711        | 0.0060936      | 0.6861130             | torch.Size([2, 512, 512])        |
| 2011    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.25.pre_norm.input_mean.mean           | output              | torch.float32 |         | -0.0481027        | 0.0735760        | 0.0060936      | 0.0003103             | torch.Size([2, 512, 1])          |
| 2012    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.25.pre_norm.sub                       | input_0             | torch.float32 |         | -6.5373259        | 6.6268711        | 0.0060936      | 0.6861130             | torch.Size([2, 512, 512])        |
| 2012    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.25.pre_norm.sub                       | input_1             | torch.float32 |         | -0.0481027        | 0.0735760        | 0.0060936      | 0.0003103             | torch.Size([2, 512, 1])          |
| 2012    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.25.pre_norm.sub                       | output              | torch.float32 |         | -6.5392704        | 6.6355247        | -0.0000000     | 0.6858031             | torch.Size([2, 512, 512])        |
| 2013    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.mul                       | input_0             | torch.float32 |         | -6.5392704        | 6.6355247        | -0.0000000     | 0.6858031             | torch.Size([2, 512, 512])        |
| 2013    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.mul                       | input_1             | torch.float32 |         | -6.5392704        | 6.6355247        | -0.0000000     | 0.6858031             | torch.Size([2, 512, 512])        |
| 2013    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.mul                       | output              | torch.float32 |         | 0.0000000         | 44.0301895       | 0.6858017      | 6.7804537             | torch.Size([2, 512, 512])        |
| 2014    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.25.pre_norm.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 44.0301895       | 0.6858017      | 6.7804537             | torch.Size([2, 512, 512])        |
| 2014    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.25.pre_norm.var_mean.mean             | output              | torch.float32 |         | 0.3761760         | 2.4890873        | 0.6858017      | 0.0595869             | torch.Size([2, 512, 1])          |
| 2015    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.25.pre_norm.rsqrt                     | input               | torch.float32 |         | 0.3761760         | 2.4890873        | 0.6858017      | 0.0595869             | torch.Size([2, 512, 1])          |
| 2015    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.25.pre_norm.rsqrt                     | output              | torch.float32 |         | 0.6338391         | 1.6304170        | 1.2505172      | 0.0317054             | torch.Size([2, 512, 1])          |
| 2016    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.out_mul                   | input_0             | torch.float32 |         | -6.5392704        | 6.6355247        | -0.0000000     | 0.6858031             | torch.Size([2, 512, 512])        |
| 2016    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.out_mul                   | input_1             | torch.float32 |         | 0.6338391         | 1.6304170        | 1.2505172      | 0.0317054             | torch.Size([2, 512, 1])          |
| 2016    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.out_mul                   | output              | torch.float32 |         | -9.6354713        | 8.1155338        | -0.0000000     | 0.9999859             | torch.Size([2, 512, 512])        |
| 2017    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.25.pre_norm.weight_quant              | input               | torch.float32 |         | 0.6055309         | 1.5414252        | 1.0381298      | 0.0553940             | torch.Size([512])                |
| 2017    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.25.pre_norm.weight_quant              | output              | torch.float32 |         | 0.6055309         | 1.5414252        | 1.0381298      | 0.0553940             | torch.Size([512])                |
| 2018    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.weight_mul                | input_0             | torch.float32 |         | -9.6354713        | 8.1155338        | -0.0000000     | 0.9999859             | torch.Size([2, 512, 512])        |
| 2018    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.weight_mul                | input_1             | torch.float32 |         | 0.6055309         | 1.5414252        | 1.0381298      | 0.0553940             | torch.Size([512])                |
| 2018    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.25.pre_norm.weight_mul                | output              | torch.float32 |         | -5.8866577        | 5.7384629        | 0.0057474      | 0.7292251             | torch.Size([2, 512, 512])        |
| 2019    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.25.pre_norm.bias_quant                | input               | torch.float32 |         | -0.1894612        | 0.2801258        | -0.0025418     | 0.0019453             | torch.Size([512])                |
| 2019    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.25.pre_norm.bias_quant                | output              | torch.float32 |         | -0.1894612        | 0.2801258        | -0.0025418     | 0.0019453             | torch.Size([512])                |
| 2020    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.25.pre_norm.bias_add                  | input_0             | torch.float32 |         | -5.8866577        | 5.7384629        | 0.0057474      | 0.7292251             | torch.Size([2, 512, 512])        |
| 2020    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.25.pre_norm.bias_add                  | input_1             | torch.float32 |         | -0.1894612        | 0.2801258        | -0.0025418     | 0.0019453             | torch.Size([512])                |
| 2020    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.25.pre_norm.bias_add                  | output              | torch.float32 |         | -5.6785464        | 5.5510430        | 0.0032056      | 0.7061962             | torch.Size([2, 512, 512])        |
| 2021    | torch.nn.modules.linear.Linear                                                    | head.layers.25.layers.0.0                         | input               | torch.float32 |         | -5.6785464        | 5.5510430        | 0.0032056      | 0.7061962             | torch.Size([2, 512, 512])        |
| 2021    | torch.nn.modules.linear.Linear                                                    | head.layers.25.layers.0.0                         | weight              | torch.float32 |         | -0.4742253        | 0.4443682        | -0.0002076     | 0.0065301             | torch.Size([1024, 512])          |
| 2021    | torch.nn.modules.linear.Linear                                                    | head.layers.25.layers.0.0                         | bias                | torch.float32 |         | -0.1643867        | 0.0346350        | -0.0589554     | 0.0010728             | torch.Size([1024])               |
| 2021    | torch.nn.modules.linear.Linear                                                    | head.layers.25.layers.0.0                         | output              | torch.float32 |         | -14.9337330       | 13.5162630       | -3.0311475     | 7.7528653             | torch.Size([2, 512, 1024])       |
| 2022    | torch.nn.modules.activation.ReLU                                                  | head.layers.25.activate                           | input               | torch.float32 |         | 0.0000000         | 13.5162630       | 0.2246078      | 0.7025402             | torch.Size([2, 512, 1024])       |
| 2022    | torch.nn.modules.activation.ReLU                                                  | head.layers.25.activate                           | output              | torch.float32 |         | 0.0000000         | 13.5162630       | 0.2246078      | 0.7025402             | torch.Size([2, 512, 1024])       |
| 2023    | torch.nn.modules.dropout.Dropout                                                  | head.layers.25.layers.0.2                         | input               | torch.float32 |         | 0.0000000         | 13.5162630       | 0.2246078      | 0.7025402             | torch.Size([2, 512, 1024])       |
| 2023    | torch.nn.modules.dropout.Dropout                                                  | head.layers.25.layers.0.2                         | output              | torch.float32 |         | 0.0000000         | 13.5162630       | 0.2246078      | 0.7025402             | torch.Size([2, 512, 1024])       |
| 2024    | torch.nn.modules.linear.Linear                                                    | head.layers.25.layers.1                           | input               | torch.float32 |         | 0.0000000         | 13.5162630       | 0.2246078      | 0.7025402             | torch.Size([2, 512, 1024])       |
| 2024    | torch.nn.modules.linear.Linear                                                    | head.layers.25.layers.1                           | weight              | torch.float32 |         | -0.4354753        | 0.4189465        | 0.0000335      | 0.0068930             | torch.Size([256, 1024])          |
| 2024    | torch.nn.modules.linear.Linear                                                    | head.layers.25.layers.1                           | bias                | torch.float32 |         | -0.0726037        | 0.0860158        | -0.0004128     | 0.0009016             | torch.Size([256])                |
| 2024    | torch.nn.modules.linear.Linear                                                    | head.layers.25.layers.1                           | output              | torch.float32 |         | -24.3115654       | 23.2888260       | 0.0394235      | 20.1230049            | torch.Size([2, 512, 256])        |
| 2025    | torch.nn.modules.dropout.Dropout                                                  | head.layers.25.layers.2                           | input               | torch.float32 |         | -24.3115654       | 23.2888260       | 0.0394235      | 20.1230049            | torch.Size([2, 512, 256])        |
| 2025    | torch.nn.modules.dropout.Dropout                                                  | head.layers.25.layers.2                           | output              | torch.float32 |         | -24.3115654       | 23.2888260       | 0.0394235      | 20.1230049            | torch.Size([2, 512, 256])        |
| 2026    | torch.nn.modules.linear.Linear                                                    | head.layers.25.identity_fc                        | input               | torch.float32 |         | -5.6785464        | 5.5510430        | 0.0032056      | 0.7061962             | torch.Size([2, 512, 512])        |
| 2026    | torch.nn.modules.linear.Linear                                                    | head.layers.25.identity_fc                        | weight              | torch.float32 |         | -0.3958582        | 0.4033061        | 0.0002529      | 0.0075558             | torch.Size([256, 512])           |
| 2026    | torch.nn.modules.linear.Linear                                                    | head.layers.25.identity_fc                        | bias                | torch.float32 |         | -0.0905164        | 0.0738403        | -0.0010515     | 0.0010065             | torch.Size([256])                |
| 2026    | torch.nn.modules.linear.Linear                                                    | head.layers.25.identity_fc                        | output              | torch.float32 |         | -14.6214371       | 12.4423504       | -0.0203219     | 9.5537977             | torch.Size([2, 512, 256])        |
| 2027    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.25.short_add                          | input_0             | torch.float32 |         | -14.6214371       | 12.4423504       | -0.0203219     | 9.5537977             | torch.Size([2, 512, 256])        |
| 2027    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.25.short_add                          | input_1             | torch.float32 |         | -24.3115654       | 23.2888260       | 0.0394235      | 20.1230049            | torch.Size([2, 512, 256])        |
| 2027    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.25.short_add                          | output              | torch.float32 |         | -28.2853851       | 29.6506729       | 0.0191016      | 41.0256653            | torch.Size([2, 512, 256])        |
| 2028    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.26.input_mean.mean                    | input_0             | torch.float32 |         | -28.2853851       | 29.6506729       | 0.0191016      | 41.0256653            | torch.Size([2, 512, 256])        |
| 2028    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.26.input_mean.mean                    | output              | torch.float32 |         | -0.1335631        | 0.1378610        | 0.0191016      | 0.0035529             | torch.Size([2, 512, 1])          |
| 2029    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.26.sub                                | input_0             | torch.float32 |         | -28.2853851       | 29.6506729       | 0.0191016      | 41.0256653            | torch.Size([2, 512, 256])        |
| 2029    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.26.sub                                | input_1             | torch.float32 |         | -0.1335631        | 0.1378610        | 0.0191016      | 0.0035529             | torch.Size([2, 512, 1])          |
| 2029    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.26.sub                                | output              | torch.float32 |         | -28.3323860       | 29.6036720       | -0.0000000     | 41.0221214            | torch.Size([2, 512, 256])        |
| 2030    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.26.mul                                | input_0             | torch.float32 |         | -28.3323860       | 29.6036720       | -0.0000000     | 41.0221214            | torch.Size([2, 512, 256])        |
| 2030    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.26.mul                                | input_1             | torch.float32 |         | -28.3323860       | 29.6036720       | -0.0000000     | 41.0221214            | torch.Size([2, 512, 256])        |
| 2030    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.26.mul                                | output              | torch.float32 |         | 0.0000000         | 876.3773804      | 41.0219650     | 9017.2138672          | torch.Size([2, 512, 256])        |
| 2031    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.26.var_mean.mean                      | input_0             | torch.float32 |         | 0.0000000         | 876.3773804      | 41.0219650     | 9017.2138672          | torch.Size([2, 512, 256])        |
| 2031    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.26.var_mean.mean                      | output              | torch.float32 |         | 4.4994421         | 144.2321320      | 41.0219650     | 2585.1962891          | torch.Size([2, 512, 1])          |
| 2032    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.26.rsqrt                              | input               | torch.float32 |         | 4.4994421         | 144.2321320      | 41.0219650     | 2585.1962891          | torch.Size([2, 512, 1])          |
| 2032    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.26.rsqrt                              | output              | torch.float32 |         | 0.0832662         | 0.4714332        | 0.2510222      | 0.0114529             | torch.Size([2, 512, 1])          |
| 2033    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.26.out_mul                            | input_0             | torch.float32 |         | -28.3323860       | 29.6036720       | -0.0000000     | 41.0221214            | torch.Size([2, 512, 256])        |
| 2033    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.26.out_mul                            | input_1             | torch.float32 |         | 0.0832662         | 0.4714332        | 0.2510222      | 0.0114529             | torch.Size([2, 512, 1])          |
| 2033    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.26.out_mul                            | output              | torch.float32 |         | -4.7826443        | 3.6187088        | -0.0000000     | 1.0000031             | torch.Size([2, 512, 256])        |
| 2034    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.26.weight_quant                       | input               | torch.float32 |         | 0.7192894         | 1.0790963        | 0.9198906      | 0.0035739             | torch.Size([256])                |
| 2034    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.26.weight_quant                       | output              | torch.float32 |         | 0.7192894         | 1.0790963        | 0.9198906      | 0.0035739             | torch.Size([256])                |
| 2035    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.26.weight_mul                         | input_0             | torch.float32 |         | -4.7826443        | 3.6187088        | -0.0000000     | 1.0000031             | torch.Size([2, 512, 256])        |
| 2035    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.26.weight_mul                         | input_1             | torch.float32 |         | 0.7192894         | 1.0790963        | 0.9198906      | 0.0035739             | torch.Size([256])                |
| 2035    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.26.weight_mul                         | output              | torch.float32 |         | -4.7834606        | 3.4291825        | 0.0002420      | 0.8422604             | torch.Size([2, 512, 256])        |
| 2036    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.26.bias_quant                         | input               | torch.float32 |         | -0.0724428        | 0.1072301        | 0.0025768      | 0.0008598             | torch.Size([256])                |
| 2036    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.26.bias_quant                         | output              | torch.float32 |         | -0.0724428        | 0.1072301        | 0.0025768      | 0.0008598             | torch.Size([256])                |
| 2037    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.26.bias_add                           | input_0             | torch.float32 |         | -4.7834606        | 3.4291825        | 0.0002420      | 0.8422604             | torch.Size([2, 512, 256])        |
| 2037    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.26.bias_add                           | input_1             | torch.float32 |         | -0.0724428        | 0.1072301        | 0.0025768      | 0.0008598             | torch.Size([256])                |
| 2037    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.26.bias_add                           | output              | torch.float32 |         | -4.8112426        | 3.4337044        | 0.0028188      | 0.8370261             | torch.Size([2, 512, 256])        |
| 2038    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.add1                               | input_0             | torch.float32 |         | -4.8112426        | 3.4337044        | 0.0028188      | 0.8370261             | torch.Size([2, 512, 256])        |
| 2038    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.add1                               | input_1             | torch.float32 |         | -1.7142470        | 7.4040437        | 0.0533052      | 0.8556479             | torch.Size([2, 512, 256])        |
| 2038    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.add1                               | output              | torch.float32 |         | -4.2165580        | 8.4621429        | 0.0561239      | 1.4932463             | torch.Size([2, 512, 256])        |
| 2039    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.0                           | input               | torch.float32 |         | -4.2165580        | 8.4621429        | 0.0561239      | 1.4932463             | torch.Size([2, 512, 256])        |
| 2039    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.0                           | weight              | torch.float32 |         | -0.5372956        | 0.5631919        | 0.0002626      | 0.0057614             | torch.Size([256, 256])           |
| 2039    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.0                           | bias                | torch.float32 |         | -0.2096737        | 0.1025517        | -0.0419072     | 0.0025007             | torch.Size([256])                |
| 2039    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.0                           | output              | torch.float32 |         | -11.7174826       | 10.8728209       | -1.0297076     | 5.1488905             | torch.Size([2, 512, 256])        |
| 2040    | torch.nn.modules.activation.ReLU                                                  | head.layers.27.layers.1                           | input               | torch.float32 |         | 0.0000000         | 10.8728209       | 0.4741212      | 0.9199152             | torch.Size([2, 512, 256])        |
| 2040    | torch.nn.modules.activation.ReLU                                                  | head.layers.27.layers.1                           | output              | torch.float32 |         | 0.0000000         | 10.8728209       | 0.4741212      | 0.9199152             | torch.Size([2, 512, 256])        |
| 2041    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.2                           | input               | torch.float32 |         | 0.0000000         | 10.8728209       | 0.4741212      | 0.9199152             | torch.Size([2, 512, 256])        |
| 2041    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.2                           | weight              | torch.float32 |         | -0.6903200        | 0.4113263        | -0.0078947     | 0.0061080             | torch.Size([256, 256])           |
| 2041    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.2                           | bias                | torch.float32 |         | -0.1265819        | 0.1779750        | -0.0111210     | 0.0030116             | torch.Size([256])                |
| 2041    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.2                           | output              | torch.float32 |         | -11.7287245       | 7.6541786        | -0.9669812     | 4.1373844             | torch.Size([2, 512, 256])        |
| 2042    | torch.nn.modules.activation.ReLU                                                  | head.layers.27.layers.3                           | input               | torch.float32 |         | 0.0000000         | 7.6541786        | 0.3857269      | 0.5928929             | torch.Size([2, 512, 256])        |
| 2042    | torch.nn.modules.activation.ReLU                                                  | head.layers.27.layers.3                           | output              | torch.float32 |         | 0.0000000         | 7.6541786        | 0.3857269      | 0.5928929             | torch.Size([2, 512, 256])        |
| 2043    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.27.layers.4.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 7.6541786        | 0.3857269      | 0.5928929             | torch.Size([2, 512, 256])        |
| 2043    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.27.layers.4.input_mean.mean           | output              | torch.float32 |         | 0.2055765         | 0.6971165        | 0.3857269      | 0.0051186             | torch.Size([2, 512, 1])          |
| 2044    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.27.layers.4.sub                       | input_0             | torch.float32 |         | 0.0000000         | 7.6541786        | 0.3857269      | 0.5928929             | torch.Size([2, 512, 256])        |
| 2044    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.27.layers.4.sub                       | input_1             | torch.float32 |         | 0.2055765         | 0.6971165        | 0.3857269      | 0.0051186             | torch.Size([2, 512, 1])          |
| 2044    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.27.layers.4.sub                       | output              | torch.float32 |         | -0.6971165        | 7.1006136        | 0.0000000      | 0.5877793             | torch.Size([2, 512, 256])        |
| 2045    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.mul                       | input_0             | torch.float32 |         | -0.6971165        | 7.1006136        | 0.0000000      | 0.5877793             | torch.Size([2, 512, 256])        |
| 2045    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.mul                       | input_1             | torch.float32 |         | -0.6971165        | 7.1006136        | 0.0000000      | 0.5877793             | torch.Size([2, 512, 256])        |
| 2045    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.mul                       | output              | torch.float32 |         | 0.0000000         | 50.4187126       | 0.5877770      | 2.9774384             | torch.Size([2, 512, 256])        |
| 2046    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.27.layers.4.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 50.4187126       | 0.5877770      | 2.9774384             | torch.Size([2, 512, 256])        |
| 2046    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.27.layers.4.var_mean.mean             | output              | torch.float32 |         | 0.1733537         | 1.4804815        | 0.5877770      | 0.0458061             | torch.Size([2, 512, 1])          |
| 2047    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.27.layers.4.rsqrt                     | input               | torch.float32 |         | 0.1733537         | 1.4804815        | 0.5877770      | 0.0458061             | torch.Size([2, 512, 1])          |
| 2047    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.27.layers.4.rsqrt                     | output              | torch.float32 |         | 0.8218585         | 2.4017119        | 1.3735702      | 0.0712695             | torch.Size([2, 512, 1])          |
| 2048    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.out_mul                   | input_0             | torch.float32 |         | -0.6971165        | 7.1006136        | 0.0000000      | 0.5877793             | torch.Size([2, 512, 256])        |
| 2048    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.out_mul                   | input_1             | torch.float32 |         | 0.8218585         | 2.4017119        | 1.3735702      | 0.0712695             | torch.Size([2, 512, 1])          |
| 2048    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.out_mul                   | output              | torch.float32 |         | -0.6214122        | 7.5780916        | 0.0000000      | 0.9999842             | torch.Size([2, 512, 256])        |
| 2049    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.4.weight_quant              | input               | torch.float32 |         | 0.6927252         | 1.1722289        | 0.9681799      | 0.0055170             | torch.Size([256])                |
| 2049    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.4.weight_quant              | output              | torch.float32 |         | 0.6927252         | 1.1722289        | 0.9681799      | 0.0055170             | torch.Size([256])                |
| 2050    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.weight_mul                | input_0             | torch.float32 |         | -0.6214122        | 7.5780916        | 0.0000000      | 0.9999842             | torch.Size([2, 512, 256])        |
| 2050    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.weight_mul                | input_1             | torch.float32 |         | 0.6927252         | 1.1722289        | 0.9681799      | 0.0055170             | torch.Size([256])                |
| 2050    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.4.weight_mul                | output              | torch.float32 |         | -0.7284373        | 8.2987404        | 0.0056385      | 0.9673083             | torch.Size([2, 512, 256])        |
| 2051    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.4.bias_quant                | input               | torch.float32 |         | -0.1199606        | 0.2986090        | 0.0510115      | 0.0063596             | torch.Size([256])                |
| 2051    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.4.bias_quant                | output              | torch.float32 |         | -0.1199606        | 0.2986090        | 0.0510115      | 0.0063596             | torch.Size([256])                |
| 2052    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.layers.4.bias_add                  | input_0             | torch.float32 |         | -0.7284373        | 8.2987404        | 0.0056385      | 0.9673083             | torch.Size([2, 512, 256])        |
| 2052    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.layers.4.bias_add                  | input_1             | torch.float32 |         | -0.1199606        | 0.2986090        | 0.0510115      | 0.0063596             | torch.Size([256])                |
| 2052    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.layers.4.bias_add                  | output              | torch.float32 |         | -0.7347968        | 8.2923813        | 0.0566500      | 0.9307991             | torch.Size([2, 512, 256])        |
| 2053    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.5                           | input               | torch.float32 |         | -0.7347968        | 8.2923813        | 0.0566500      | 0.9307991             | torch.Size([2, 512, 256])        |
| 2053    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.5                           | weight              | torch.float32 |         | -0.4725817        | 0.4318931        | 0.0043037      | 0.0048818             | torch.Size([256, 256])           |
| 2053    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.5                           | bias                | torch.float32 |         | -0.1813288        | 0.0764300        | -0.0312060     | 0.0026632             | torch.Size([256])                |
| 2053    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.5                           | output              | torch.float32 |         | -8.1163788        | 10.5940866       | -0.8927885     | 3.7537169             | torch.Size([2, 512, 256])        |
| 2054    | torch.nn.modules.activation.ReLU                                                  | head.layers.27.layers.6                           | input               | torch.float32 |         | 0.0000000         | 10.5940866       | 0.4212714      | 0.8314920             | torch.Size([2, 512, 256])        |
| 2054    | torch.nn.modules.activation.ReLU                                                  | head.layers.27.layers.6                           | output              | torch.float32 |         | 0.0000000         | 10.5940866       | 0.4212714      | 0.8314920             | torch.Size([2, 512, 256])        |
| 2055    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.7                           | input               | torch.float32 |         | 0.0000000         | 10.5940866       | 0.4212714      | 0.8314920             | torch.Size([2, 512, 256])        |
| 2055    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.7                           | weight              | torch.float32 |         | -0.3544154        | 0.5146543        | -0.0073491     | 0.0036408             | torch.Size([256, 256])           |
| 2055    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.7                           | bias                | torch.float32 |         | -0.1227437        | 0.2899182        | -0.0230045     | 0.0021475             | torch.Size([256])                |
| 2055    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.7                           | output              | torch.float32 |         | -12.4737196       | 41.9587708       | -1.7373395     | 6.2891350             | torch.Size([2, 512, 256])        |
| 2056    | torch.nn.modules.activation.ReLU                                                  | head.layers.27.layers.8                           | input               | torch.float32 |         | 0.0000000         | 41.9587708       | 0.3267115      | 2.2678998             | torch.Size([2, 512, 256])        |
| 2056    | torch.nn.modules.activation.ReLU                                                  | head.layers.27.layers.8                           | output              | torch.float32 |         | 0.0000000         | 41.9587708       | 0.3267115      | 2.2678998             | torch.Size([2, 512, 256])        |
| 2057    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.27.layers.9.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 41.9587708       | 0.3267115      | 2.2678998             | torch.Size([2, 512, 256])        |
| 2057    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.27.layers.9.input_mean.mean           | output              | torch.float32 |         | 0.1649162         | 1.2215778        | 0.3267115      | 0.0192165             | torch.Size([2, 512, 1])          |
| 2058    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.27.layers.9.sub                       | input_0             | torch.float32 |         | 0.0000000         | 41.9587708       | 0.3267115      | 2.2678998             | torch.Size([2, 512, 256])        |
| 2058    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.27.layers.9.sub                       | input_1             | torch.float32 |         | 0.1649162         | 1.2215778        | 0.3267115      | 0.0192165             | torch.Size([2, 512, 1])          |
| 2058    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.27.layers.9.sub                       | output              | torch.float32 |         | -1.2215778        | 41.6361542       | -0.0000000     | 2.2487020             | torch.Size([2, 512, 256])        |
| 2059    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.mul                       | input_0             | torch.float32 |         | -1.2215778        | 41.6361542       | -0.0000000     | 2.2487020             | torch.Size([2, 512, 256])        |
| 2059    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.mul                       | input_1             | torch.float32 |         | -1.2215778        | 41.6361542       | -0.0000000     | 2.2487020             | torch.Size([2, 512, 256])        |
| 2059    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.mul                       | output              | torch.float32 |         | 0.0000000         | 1733.5693359     | 2.2486935      | 1086.6604004          | torch.Size([2, 512, 256])        |
| 2060    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.27.layers.9.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 1733.5693359     | 2.2486935      | 1086.6604004          | torch.Size([2, 512, 256])        |
| 2060    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.27.layers.9.var_mean.mean             | output              | torch.float32 |         | 0.4506176         | 7.1817489        | 2.2486932      | 2.0699608             | torch.Size([2, 512, 1])          |
| 2061    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.27.layers.9.rsqrt                     | input               | torch.float32 |         | 0.4506176         | 7.1817489        | 2.2486932      | 2.0699608             | torch.Size([2, 512, 1])          |
| 2061    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.27.layers.9.rsqrt                     | output              | torch.float32 |         | 0.3731510         | 1.4896735        | 0.7694060      | 0.0520938             | torch.Size([2, 512, 1])          |
| 2062    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.out_mul                   | input_0             | torch.float32 |         | -1.2215778        | 41.6361542       | -0.0000000     | 2.2487020             | torch.Size([2, 512, 256])        |
| 2062    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.out_mul                   | input_1             | torch.float32 |         | 0.3731510         | 1.4896735        | 0.7694060      | 0.0520938             | torch.Size([2, 512, 1])          |
| 2062    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.out_mul                   | output              | torch.float32 |         | -0.4752313        | 15.7661915       | -0.0000000     | 0.9999974             | torch.Size([2, 512, 256])        |
| 2063    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.9.weight_quant              | input               | torch.float32 |         | 0.6694351         | 1.1924911        | 0.9463960      | 0.0051841             | torch.Size([256])                |
| 2063    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.9.weight_quant              | output              | torch.float32 |         | 0.6694351         | 1.1924911        | 0.9463960      | 0.0051841             | torch.Size([256])                |
| 2064    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.weight_mul                | input_0             | torch.float32 |         | -0.4752313        | 15.7661915       | -0.0000000     | 0.9999974             | torch.Size([2, 512, 256])        |
| 2064    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.weight_mul                | input_1             | torch.float32 |         | 0.6694351         | 1.1924911        | 0.9463960      | 0.0051841             | torch.Size([256])                |
| 2064    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.9.weight_mul                | output              | torch.float32 |         | -0.5667091        | 10.5544415       | -0.0126811     | 0.6285820             | torch.Size([2, 512, 256])        |
| 2065    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.9.bias_quant                | input               | torch.float32 |         | -0.3060245        | 0.0903289        | 0.0524159      | 0.0020540             | torch.Size([256])                |
| 2065    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.9.bias_quant                | output              | torch.float32 |         | -0.3060245        | 0.0903289        | 0.0524159      | 0.0020540             | torch.Size([256])                |
| 2066    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.layers.9.bias_add                  | input_0             | torch.float32 |         | -0.5667091        | 10.5544415       | -0.0126811     | 0.6285820             | torch.Size([2, 512, 256])        |
| 2066    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.layers.9.bias_add                  | input_1             | torch.float32 |         | -0.3060245        | 0.0903289        | 0.0524159      | 0.0020540             | torch.Size([256])                |
| 2066    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.layers.9.bias_add                  | output              | torch.float32 |         | -0.6241610        | 10.2484169       | 0.0397347      | 0.5897449             | torch.Size([2, 512, 256])        |
| 2067    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.10                          | input               | torch.float32 |         | -0.6241610        | 10.2484169       | 0.0397347      | 0.5897449             | torch.Size([2, 512, 256])        |
| 2067    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.10                          | weight              | torch.float32 |         | -0.3738195        | 0.3876365        | -0.0004279     | 0.0034504             | torch.Size([11, 256])            |
| 2067    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.10                          | bias                | torch.float32 |         | -0.0642515        | 0.0481880        | -0.0072857     | 0.0011733             | torch.Size([11])                 |
| 2067    | torch.nn.modules.linear.Linear                                                    | head.layers.27.layers.10                          | output              | torch.float32 |         | -14.0758104       | 9.9356279        | -0.0613833     | 2.1860006             | torch.Size([2, 512, 11])         |
| 2068    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.11.scale_quant_stub         | input               | torch.float32 |         | 0.1060089         | 0.8700237        | 0.3596667      | 0.0556504             | torch.Size([11])                 |
| 2068    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.27.layers.11.scale_quant_stub         | output              | torch.float32 |         | 0.1060089         | 0.8700237        | 0.3596667      | 0.0556504             | torch.Size([11])                 |
| 2069    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.11.mul                      | input_0             | torch.float32 |         | -14.0758104       | 9.9356279        | -0.0613833     | 2.1860006             | torch.Size([2, 512, 11])         |
| 2069    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.11.mul                      | input_1             | torch.float32 |         | 0.1060089         | 0.8700237        | 0.3596667      | 0.0556504             | torch.Size([11])                 |
| 2069    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.27.layers.11.mul                      | output              | torch.float32 |         | -12.2462893       | 7.4566193        | 0.0245195      | 0.6320433             | torch.Size([2, 512, 11])         |
| 2070    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.add2                               | input_0             | torch.float32 |         | -12.2462893       | 7.4566193        | 0.0245195      | 0.6320433             | torch.Size([2, 512, 11])         |
| 2070    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.add2                               | input_1             | torch.float32 |         | -53.5869904       | 53.6926079       | 0.2068973      | 79.3955536            | torch.Size([2, 512, 11])         |
| 2070    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.27.add2                               | output              | torch.float32 |         | -53.4885979       | 53.6353264       | 0.2314168      | 80.4070053            | torch.Size([2, 512, 11])         |
| 2071    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(3)                                   | input               | torch.float32 |         | -53.4885979       | 53.6353264       | 0.2314168      | 80.4070053            | torch.Size([2, 512, 11])         |
| 2071    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(3)                                   | output              | torch.float32 |         | -53.4885979       | 53.6353264       | 0.2314168      | 80.4070053            | torch.Size([2, 512, 11])         |
| 2072    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.4885979       | 53.6353264       | 0.2314168      | 80.4070053            | torch.Size([2, 512, 11])         |
| 2072    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -53.4885979       | 53.6353264       | 0.8896438      | 291.2262573           | torch.Size([2, 512, 3])          |
| 2073    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(5)                   | input               | torch.float32 |         | -53.4885979       | 53.6353264       | 0.8896438      | 291.2262573           | torch.Size([2, 512, 3])          |
| 2073    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(5)                   | weight              | torch.float32 |         | -0.9216561        | 0.9167990        | -0.0046354     | 0.1373587             | torch.Size([128, 3])             |
| 2073    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(5)                   | bias                | torch.float32 |         | -1.0762298        | 1.0183468        | -0.0273298     | 0.3650480             | torch.Size([128])                |
| 2073    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(5)                   | output              | torch.float32 |         | -33.2242928       | 35.0465164       | -0.1269995     | 71.3468704            | torch.Size([2, 512, 128])        |
| 2074    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(5)                   | input               | torch.float32 |         | 0.0000000         | 35.0465164       | 2.9065866      | 26.0642605            | torch.Size([2, 512, 128])        |
| 2074    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(5)                   | output              | torch.float32 |         | 0.0000000         | 35.0465164       | 2.9065866      | 26.0642605            | torch.Size([2, 512, 128])        |
| 2075    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 35.0465164       | 2.9065866      | 26.0642605            | torch.Size([2, 512, 128])        |
| 2075    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(5)   | output              | torch.float32 |         | 0.2788236         | 7.3243356        | 2.9065866      | 3.9827611             | torch.Size([2, 512, 1])          |
| 2076    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(5)               | input_0             | torch.float32 |         | 0.0000000         | 35.0465164       | 2.9065866      | 26.0642605            | torch.Size([2, 512, 128])        |
| 2076    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(5)               | input_1             | torch.float32 |         | 0.2788236         | 7.3243356        | 2.9065866      | 3.9827611             | torch.Size([2, 512, 1])          |
| 2076    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(5)               | output              | torch.float32 |         | -7.3243356        | 29.4089489       | 0.0000000      | 22.0853577            | torch.Size([2, 512, 128])        |
| 2077    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(5)               | input_0             | torch.float32 |         | -7.3243356        | 29.4089489       | 0.0000000      | 22.0853577            | torch.Size([2, 512, 128])        |
| 2077    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(5)               | input_1             | torch.float32 |         | -7.3243356        | 29.4089489       | 0.0000000      | 22.0853577            | torch.Size([2, 512, 128])        |
| 2077    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(5)               | output              | torch.float32 |         | 0.0000000         | 864.8862915      | 22.0851879     | 2658.1369629          | torch.Size([2, 512, 128])        |
| 2078    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(5)     | input_0             | torch.float32 |         | 0.0000000         | 864.8862915      | 22.0851879     | 2658.1369629          | torch.Size([2, 512, 128])        |
| 2078    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(5)     | output              | torch.float32 |         | 0.1765516         | 77.2834625       | 22.0851898     | 469.4730225           | torch.Size([2, 512, 1])          |
| 2079    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(5)             | input               | torch.float32 |         | 0.1765516         | 77.2834625       | 22.0851898     | 469.4730225           | torch.Size([2, 512, 1])          |
| 2079    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(5)             | output              | torch.float32 |         | 0.1137514         | 2.3798628        | 0.7539309      | 0.7731653             | torch.Size([2, 512, 1])          |
| 2080    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(5)           | input_0             | torch.float32 |         | -7.3243356        | 29.4089489       | 0.0000000      | 22.0853577            | torch.Size([2, 512, 128])        |
| 2080    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(5)           | input_1             | torch.float32 |         | 0.1137514         | 2.3798628        | 0.7539309      | 0.7731653             | torch.Size([2, 512, 1])          |
| 2080    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(5)           | output              | torch.float32 |         | -0.8842736        | 3.8664691        | -0.0000000     | 0.9999942             | torch.Size([2, 512, 128])        |
| 2081    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(5)      | input               | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 2081    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(5)      | output              | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 2082    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(5)        | input_0             | torch.float32 |         | -0.8842736        | 3.8664691        | -0.0000000     | 0.9999942             | torch.Size([2, 512, 128])        |
| 2082    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(5)        | input_1             | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 2082    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(5)        | output              | torch.float32 |         | -1.0492187        | 4.8036847        | -0.0020064     | 0.9438987             | torch.Size([2, 512, 128])        |
| 2083    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(5)        | input               | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 2083    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(5)        | output              | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 2084    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(5)          | input_0             | torch.float32 |         | -1.0492187        | 4.8036847        | -0.0020064     | 0.9438987             | torch.Size([2, 512, 128])        |
| 2084    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(5)          | input_1             | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 2084    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(5)          | output              | torch.float32 |         | -1.0495149        | 4.7999258        | 0.0068140      | 0.9378733             | torch.Size([2, 512, 128])        |
| 2085    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(5)                   | input               | torch.float32 |         | -1.0495149        | 4.7999258        | 0.0068140      | 0.9378733             | torch.Size([2, 512, 128])        |
| 2085    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(5)                   | weight              | torch.float32 |         | -0.3750711        | 0.3968706        | 0.0019093      | 0.0048458             | torch.Size([128, 128])           |
| 2085    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(5)                   | bias                | torch.float32 |         | -0.1863807        | 0.1385574        | -0.0156467     | 0.0047256             | torch.Size([128])                |
| 2085    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(5)                   | output              | torch.float32 |         | -7.3503008        | 6.5739446        | -0.0804770     | 2.7133336             | torch.Size([2, 512, 128])        |
| 2086    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(5)                   | input               | torch.float32 |         | 0.0000000         | 6.5739446        | 0.5800522      | 0.9583497             | torch.Size([2, 512, 128])        |
| 2086    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(5)                   | output              | torch.float32 |         | 0.0000000         | 6.5739446        | 0.5800522      | 0.9583497             | torch.Size([2, 512, 128])        |
| 2087    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 6.5739446        | 0.5800522      | 0.9583497             | torch.Size([2, 512, 128])        |
| 2087    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(5)   | output              | torch.float32 |         | 0.2877981         | 1.1149176        | 0.5800521      | 0.0905023             | torch.Size([2, 512, 1])          |
| 2088    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(5)               | input_0             | torch.float32 |         | 0.0000000         | 6.5739446        | 0.5800522      | 0.9583497             | torch.Size([2, 512, 128])        |
| 2088    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(5)               | input_1             | torch.float32 |         | 0.2877981         | 1.1149176        | 0.5800521      | 0.0905023             | torch.Size([2, 512, 1])          |
| 2088    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(5)               | output              | torch.float32 |         | -1.1149176        | 5.7615943        | 0.0000000      | 0.8679351             | torch.Size([2, 512, 128])        |
| 2089    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(5)               | input_0             | torch.float32 |         | -1.1149176        | 5.7615943        | 0.0000000      | 0.8679351             | torch.Size([2, 512, 128])        |
| 2089    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(5)               | input_1             | torch.float32 |         | -1.1149176        | 5.7615943        | 0.0000000      | 0.8679351             | torch.Size([2, 512, 128])        |
| 2089    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(5)               | output              | torch.float32 |         | 0.0000000         | 33.1959686       | 0.8679285      | 5.4269938             | torch.Size([2, 512, 128])        |
| 2090    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(5)     | input_0             | torch.float32 |         | 0.0000000         | 33.1959686       | 0.8679285      | 5.4269938             | torch.Size([2, 512, 128])        |
| 2090    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(5)     | output              | torch.float32 |         | 0.3049569         | 2.1330404        | 0.8679285      | 0.4272151             | torch.Size([2, 512, 1])          |
| 2091    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(5)             | input               | torch.float32 |         | 0.3049569         | 2.1330404        | 0.8679285      | 0.4272151             | torch.Size([2, 512, 1])          |
| 2091    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(5)             | output              | torch.float32 |         | 0.6846986         | 1.8108131        | 1.2664011      | 0.1274354             | torch.Size([2, 512, 1])          |
| 2092    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(5)           | input_0             | torch.float32 |         | -1.1149176        | 5.7615943        | 0.0000000      | 0.8679351             | torch.Size([2, 512, 128])        |
| 2092    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(5)           | input_1             | torch.float32 |         | 0.6846986         | 1.8108131        | 1.2664011      | 0.1274354             | torch.Size([2, 512, 1])          |
| 2092    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(5)           | output              | torch.float32 |         | -0.7803194        | 7.0426607        | 0.0000000      | 0.9999903             | torch.Size([2, 512, 128])        |
| 2093    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(5)      | input               | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 2093    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(5)      | output              | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 2094    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(5)        | input_0             | torch.float32 |         | -0.7803194        | 7.0426607        | 0.0000000      | 0.9999903             | torch.Size([2, 512, 128])        |
| 2094    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(5)        | input_1             | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 2094    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(5)        | output              | torch.float32 |         | -0.9481950        | 6.9203086        | 0.0312785      | 0.9328706             | torch.Size([2, 512, 128])        |
| 2095    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(5)        | input               | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 2095    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(5)        | output              | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 2096    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(5)          | input_0             | torch.float32 |         | -0.9481950        | 6.9203086        | 0.0312785      | 0.9328706             | torch.Size([2, 512, 128])        |
| 2096    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(5)          | input_1             | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 2096    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(5)          | output              | torch.float32 |         | -0.9664105        | 6.9167643        | 0.0630808      | 0.9092157             | torch.Size([2, 512, 128])        |
| 2097    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(5)                   | input               | torch.float32 |         | -0.9664105        | 6.9167643        | 0.0630808      | 0.9092157             | torch.Size([2, 512, 128])        |
| 2097    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(5)                   | weight              | torch.float32 |         | -0.7504157        | 0.4182976        | -0.0024651     | 0.0052447             | torch.Size([128, 128])           |
| 2097    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(5)                   | bias                | torch.float32 |         | -0.1397866        | 0.1210779        | 0.0064616      | 0.0040949             | torch.Size([128])                |
| 2097    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(5)                   | output              | torch.float32 |         | -7.3028550        | 6.9591484        | -0.0416906     | 3.8376665             | torch.Size([2, 512, 128])        |
| 2098    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(5)                   | input               | torch.float32 |         | 0.0000000         | 6.9591484        | 0.7556536      | 1.2994782             | torch.Size([2, 512, 128])        |
| 2098    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(5)                   | output              | torch.float32 |         | 0.0000000         | 6.9591484        | 0.7556536      | 1.2994782             | torch.Size([2, 512, 128])        |
| 2099    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 6.9591484        | 0.7556536      | 1.2994782             | torch.Size([2, 512, 128])        |
| 2099    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(5)   | output              | torch.float32 |         | 0.5505865         | 1.0213631        | 0.7556535      | 0.0222843             | torch.Size([2, 512, 1])          |
| 2100    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(5)               | input_0             | torch.float32 |         | 0.0000000         | 6.9591484        | 0.7556536      | 1.2994782             | torch.Size([2, 512, 128])        |
| 2100    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(5)               | input_1             | torch.float32 |         | 0.5505865         | 1.0213631        | 0.7556535      | 0.0222843             | torch.Size([2, 512, 1])          |
| 2100    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(5)               | output              | torch.float32 |         | -1.0213631        | 6.1507373        | 0.0000000      | 1.2772156             | torch.Size([2, 512, 128])        |
| 2101    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(5)               | input_0             | torch.float32 |         | -1.0213631        | 6.1507373        | 0.0000000      | 1.2772156             | torch.Size([2, 512, 128])        |
| 2101    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(5)               | input_1             | torch.float32 |         | -1.0213631        | 6.1507373        | 0.0000000      | 1.2772156             | torch.Size([2, 512, 128])        |
| 2101    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(5)               | output              | torch.float32 |         | 0.0000000         | 37.8315697       | 1.2772058      | 6.5456686             | torch.Size([2, 512, 128])        |
| 2102    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(5)     | input_0             | torch.float32 |         | 0.0000000         | 37.8315697       | 1.2772058      | 6.5456686             | torch.Size([2, 512, 128])        |
| 2102    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(5)     | output              | torch.float32 |         | 0.8236796         | 1.9084858        | 1.2772058      | 0.1288435             | torch.Size([2, 512, 1])          |
| 2103    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(5)             | input               | torch.float32 |         | 0.8236796         | 1.9084858        | 1.2772058      | 0.1288435             | torch.Size([2, 512, 1])          |
| 2103    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(5)             | output              | torch.float32 |         | 0.7238597         | 1.1018392        | 0.9087050      | 0.0132638             | torch.Size([2, 512, 1])          |
| 2104    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(5)           | input_0             | torch.float32 |         | -1.0213631        | 6.1507373        | 0.0000000      | 1.2772156             | torch.Size([2, 512, 128])        |
| 2104    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(5)           | input_1             | torch.float32 |         | 0.7238597         | 1.1018392        | 0.9087050      | 0.0132638             | torch.Size([2, 512, 1])          |
| 2104    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(5)           | output              | torch.float32 |         | -0.7473981        | 5.0263152        | 0.0000000      | 0.9999992             | torch.Size([2, 512, 128])        |
| 2105    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(5)      | input               | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 2105    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(5)      | output              | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 2106    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(5)        | input_0             | torch.float32 |         | -0.7473981        | 5.0263152        | 0.0000000      | 0.9999992             | torch.Size([2, 512, 128])        |
| 2106    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(5)        | input_1             | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 2106    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(5)        | output              | torch.float32 |         | -0.8408087        | 5.1684585        | 0.0143784      | 0.9880540             | torch.Size([2, 512, 128])        |
| 2107    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(5)        | input               | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 2107    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(5)        | output              | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 2108    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(5)          | input_0             | torch.float32 |         | -0.8408087        | 5.1684585        | 0.0143784      | 0.9880540             | torch.Size([2, 512, 128])        |
| 2108    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(5)          | input_1             | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 2108    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(5)          | output              | torch.float32 |         | -0.8293442        | 5.2018795        | 0.0360163      | 0.9758843             | torch.Size([2, 512, 128])        |
| 2109    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(5)                   | input               | torch.float32 |         | -0.8293442        | 5.2018795        | 0.0360163      | 0.9758843             | torch.Size([2, 512, 128])        |
| 2109    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(5)                   | weight              | torch.float32 |         | -0.4264432        | 0.3183554        | 0.0005866      | 0.0053991             | torch.Size([128, 128])           |
| 2109    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(5)                   | bias                | torch.float32 |         | -0.1690418        | 0.1536980        | -0.0166056     | 0.0039884             | torch.Size([128])                |
| 2109    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(5)                   | output              | torch.float32 |         | -11.6191044       | 10.1944370       | -0.4175867     | 4.3160782             | torch.Size([2, 512, 128])        |
| 2110    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(5)                  | input               | torch.float32 |         | 0.0000000         | 10.1944370       | 0.6200405      | 1.5471147             | torch.Size([2, 512, 128])        |
| 2110    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(5)                  | output              | torch.float32 |         | 0.0000000         | 10.1944370       | 0.6200405      | 1.5471147             | torch.Size([2, 512, 128])        |
| 2111    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(5)  | input_0             | torch.float32 |         | 0.0000000         | 10.1944370       | 0.6200405      | 1.5471147             | torch.Size([2, 512, 128])        |
| 2111    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(5)  | output              | torch.float32 |         | 0.5250081         | 0.7298517        | 0.6200405      | 0.0017415             | torch.Size([2, 512, 1])          |
| 2112    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(5)              | input_0             | torch.float32 |         | 0.0000000         | 10.1944370       | 0.6200405      | 1.5471147             | torch.Size([2, 512, 128])        |
| 2112    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(5)              | input_1             | torch.float32 |         | 0.5250081         | 0.7298517        | 0.6200405      | 0.0017415             | torch.Size([2, 512, 1])          |
| 2112    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(5)              | output              | torch.float32 |         | -0.7298517        | 9.6398878        | 0.0000000      | 1.5453749             | torch.Size([2, 512, 128])        |
| 2113    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(5)              | input_0             | torch.float32 |         | -0.7298517        | 9.6398878        | 0.0000000      | 1.5453749             | torch.Size([2, 512, 128])        |
| 2113    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(5)              | input_1             | torch.float32 |         | -0.7298517        | 9.6398878        | 0.0000000      | 1.5453749             | torch.Size([2, 512, 128])        |
| 2113    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(5)              | output              | torch.float32 |         | 0.0000000         | 92.9274368       | 1.5453631      | 24.8013992            | torch.Size([2, 512, 128])        |
| 2114    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(5)    | input_0             | torch.float32 |         | 0.0000000         | 92.9274368       | 1.5453631      | 24.8013992            | torch.Size([2, 512, 128])        |
| 2114    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(5)    | output              | torch.float32 |         | 1.0462710         | 1.9424751        | 1.5453631      | 0.0418079             | torch.Size([2, 512, 1])          |
| 2115    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(5)            | input               | torch.float32 |         | 1.0462710         | 1.9424751        | 1.5453631      | 0.0418079             | torch.Size([2, 512, 1])          |
| 2115    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(5)            | output              | torch.float32 |         | 0.7174988         | 0.9776329        | 0.8098384      | 0.0029885             | torch.Size([2, 512, 1])          |
| 2116    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(5)          | input_0             | torch.float32 |         | -0.7298517        | 9.6398878        | 0.0000000      | 1.5453749             | torch.Size([2, 512, 128])        |
| 2116    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(5)          | input_1             | torch.float32 |         | 0.7174988         | 0.9776329        | 0.8098384      | 0.0029885             | torch.Size([2, 512, 1])          |
| 2116    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(5)          | output              | torch.float32 |         | -0.5915809        | 7.3303647        | -0.0000000     | 1.0000011             | torch.Size([2, 512, 128])        |
| 2117    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(5)     | input               | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 2117    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(5)     | output              | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 2118    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(5)       | input_0             | torch.float32 |         | -0.5915809        | 7.3303647        | -0.0000000     | 1.0000011             | torch.Size([2, 512, 128])        |
| 2118    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(5)       | input_1             | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 2118    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(5)       | output              | torch.float32 |         | -0.8263150        | 7.4046288        | 0.0099244      | 0.9071248             | torch.Size([2, 512, 128])        |
| 2119    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(5)       | input               | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 2119    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(5)       | output              | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 2120    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(5)         | input_0             | torch.float32 |         | -0.8263150        | 7.4046288        | 0.0099244      | 0.9071248             | torch.Size([2, 512, 128])        |
| 2120    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(5)         | input_1             | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 2120    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(5)         | output              | torch.float32 |         | -0.8304361        | 7.3573351        | 0.0719146      | 0.8676326             | torch.Size([2, 512, 128])        |
| 2121    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.4885979       | 53.6353264       | 0.2314168      | 80.4070053            | torch.Size([2, 512, 11])         |
| 2121    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -0.9520481        | 2.8980398        | 0.1690388      | 0.4525367             | torch.Size([2, 512, 3])          |
| 2122    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(5)                  | input               | torch.float32 |         | -0.9520481        | 2.8980398        | 0.1690388      | 0.4525367             | torch.Size([2, 512, 3])          |
| 2122    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(5)                  | weight              | torch.float32 |         | -0.8288664        | 0.6362330        | 0.0683853      | 0.1118651             | torch.Size([32, 3])              |
| 2122    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(5)                  | bias                | torch.float32 |         | -0.5554879        | 0.5432062        | 0.0766153      | 0.1068659             | torch.Size([32])                 |
| 2122    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(5)                  | output              | torch.float32 |         | -2.0687222        | 2.4637070        | 0.0963218      | 0.2485777             | torch.Size([2, 512, 32])         |
| 2123    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(5)                  | input               | torch.float32 |         | 0.0000000         | 2.4637070        | 0.2510491      | 0.1017612             | torch.Size([2, 512, 32])         |
| 2123    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(5)                  | output              | torch.float32 |         | 0.0000000         | 2.4637070        | 0.2510491      | 0.1017612             | torch.Size([2, 512, 32])         |
| 2124    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(5)  | input_0             | torch.float32 |         | 0.0000000         | 2.4637070        | 0.2510491      | 0.1017612             | torch.Size([2, 512, 32])         |
| 2124    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(5)  | output              | torch.float32 |         | 0.1580121         | 0.7087687        | 0.2510491      | 0.0137535             | torch.Size([2, 512, 1])          |
| 2125    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(5)              | input_0             | torch.float32 |         | 0.0000000         | 2.4637070        | 0.2510491      | 0.1017612             | torch.Size([2, 512, 32])         |
| 2125    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(5)              | input_1             | torch.float32 |         | 0.1580121         | 0.7087687        | 0.2510491      | 0.0137535             | torch.Size([2, 512, 1])          |
| 2125    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(5)              | output              | torch.float32 |         | -0.7087687        | 1.7549382        | 0.0000000      | 0.0880208             | torch.Size([2, 512, 32])         |
| 2126    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(5)              | input_0             | torch.float32 |         | -0.7087687        | 1.7549382        | 0.0000000      | 0.0880208             | torch.Size([2, 512, 32])         |
| 2126    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(5)              | input_1             | torch.float32 |         | -0.7087687        | 1.7549382        | 0.0000000      | 0.0880208             | torch.Size([2, 512, 32])         |
| 2126    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(5)              | output              | torch.float32 |         | 0.0000000         | 3.0798082        | 0.0880181      | 0.0270342             | torch.Size([2, 512, 32])         |
| 2127    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(5)    | input_0             | torch.float32 |         | 0.0000000         | 3.0798082        | 0.0880181      | 0.0270342             | torch.Size([2, 512, 32])         |
| 2127    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(5)    | output              | torch.float32 |         | 0.0352554         | 0.4837159        | 0.0880181      | 0.0045599             | torch.Size([2, 512, 1])          |
| 2128    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(5)            | input               | torch.float32 |         | 0.0352554         | 0.4837159        | 0.0880181      | 0.0045599             | torch.Size([2, 512, 1])          |
| 2128    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(5)            | output              | torch.float32 |         | 1.4378061         | 5.3250732        | 3.9116015      | 1.0773596             | torch.Size([2, 512, 1])          |
| 2129    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(5)          | input_0             | torch.float32 |         | -0.7087687        | 1.7549382        | 0.0000000      | 0.0880208             | torch.Size([2, 512, 32])         |
| 2129    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(5)          | input_1             | torch.float32 |         | 1.4378061         | 5.3250732        | 3.9116015      | 1.0773596             | torch.Size([2, 512, 1])          |
| 2129    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(5)          | output              | torch.float32 |         | -1.1181829        | 3.0565920        | 0.0000000      | 0.9998667             | torch.Size([2, 512, 32])         |
| 2130    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(5)     | input               | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 2130    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(5)     | output              | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 2131    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(5)       | input_0             | torch.float32 |         | -1.1181829        | 3.0565920        | 0.0000000      | 0.9998667             | torch.Size([2, 512, 32])         |
| 2131    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(5)       | input_1             | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 2131    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(5)       | output              | torch.float32 |         | -1.2437234        | 3.2770019        | 0.0100029      | 0.9948031             | torch.Size([2, 512, 32])         |
| 2132    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(5)       | input               | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 2132    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(5)       | output              | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 2133    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(5)         | input_0             | torch.float32 |         | -1.2437234        | 3.2770019        | 0.0100029      | 0.9948031             | torch.Size([2, 512, 32])         |
| 2133    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(5)         | input_1             | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 2133    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(5)         | output              | torch.float32 |         | -1.2208853        | 3.2733810        | 0.0135291      | 0.9485546             | torch.Size([2, 512, 32])         |
| 2134    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(5)                  | input               | torch.float32 |         | -1.2208853        | 3.2733810        | 0.0135291      | 0.9485546             | torch.Size([2, 512, 32])         |
| 2134    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(5)                  | weight              | torch.float32 |         | -0.5793310        | 0.5422795        | -0.0032135     | 0.0176575             | torch.Size([32, 32])             |
| 2134    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(5)                  | bias                | torch.float32 |         | -0.1716317        | 0.2230143        | 0.0007250      | 0.0126328             | torch.Size([32])                 |
| 2134    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(5)                  | output              | torch.float32 |         | -4.2852383        | 2.1473641        | -0.2305952     | 1.3941813             | torch.Size([2, 512, 32])         |
| 2135    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(5)                  | input               | torch.float32 |         | 0.0000000         | 2.1473641        | 0.3594977      | 0.2479380             | torch.Size([2, 512, 32])         |
| 2135    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(5)                  | output              | torch.float32 |         | 0.0000000         | 2.1473641        | 0.3594977      | 0.2479380             | torch.Size([2, 512, 32])         |
| 2136    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(5)  | input_0             | torch.float32 |         | 0.0000000         | 2.1473641        | 0.3594977      | 0.2479380             | torch.Size([2, 512, 32])         |
| 2136    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(5)  | output              | torch.float32 |         | 0.2741665         | 0.4184157        | 0.3594977      | 0.0008236             | torch.Size([2, 512, 1])          |
| 2137    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(5)              | input_0             | torch.float32 |         | 0.0000000         | 2.1473641        | 0.3594977      | 0.2479380             | torch.Size([2, 512, 32])         |
| 2137    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(5)              | input_1             | torch.float32 |         | 0.2741665         | 0.4184157        | 0.3594977      | 0.0008236             | torch.Size([2, 512, 1])          |
| 2137    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(5)              | output              | torch.float32 |         | -0.4184157        | 1.7878007        | 0.0000000      | 0.2471152             | torch.Size([2, 512, 32])         |
| 2138    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(5)              | input_0             | torch.float32 |         | -0.4184157        | 1.7878007        | 0.0000000      | 0.2471152             | torch.Size([2, 512, 32])         |
| 2138    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(5)              | input_1             | torch.float32 |         | -0.4184157        | 1.7878007        | 0.0000000      | 0.2471152             | torch.Size([2, 512, 32])         |
| 2138    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(5)              | output              | torch.float32 |         | 0.0000000         | 3.1962311        | 0.2471077      | 0.1542506             | torch.Size([2, 512, 32])         |
| 2139    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(5)    | input_0             | torch.float32 |         | 0.0000000         | 3.1962311        | 0.2471077      | 0.1542506             | torch.Size([2, 512, 32])         |
| 2139    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(5)    | output              | torch.float32 |         | 0.1540101         | 0.3339099        | 0.2471077      | 0.0029843             | torch.Size([2, 512, 1])          |
| 2140    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(5)            | input               | torch.float32 |         | 0.1540101         | 0.3339099        | 0.2471077      | 0.0029843             | torch.Size([2, 512, 1])          |
| 2140    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(5)            | output              | torch.float32 |         | 1.7305288         | 2.5480697        | 2.0541382      | 0.0654127             | torch.Size([2, 512, 1])          |
| 2141    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(5)          | input_0             | torch.float32 |         | -0.4184157        | 1.7878007        | 0.0000000      | 0.2471152             | torch.Size([2, 512, 32])         |
| 2141    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(5)          | input_1             | torch.float32 |         | 1.7305288         | 2.5480697        | 2.0541382      | 0.0654127             | torch.Size([2, 512, 1])          |
| 2141    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(5)          | output              | torch.float32 |         | -0.9098247        | 3.7824233        | 0.0000000      | 0.9999877             | torch.Size([2, 512, 32])         |
| 2142    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(5)     | input               | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 2142    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(5)     | output              | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 2143    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(5)       | input_0             | torch.float32 |         | -0.9098247        | 3.7824233        | 0.0000000      | 0.9999877             | torch.Size([2, 512, 32])         |
| 2143    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(5)       | input_1             | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 2143    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(5)       | output              | torch.float32 |         | -0.9179578        | 3.6144907        | 0.0096358      | 0.9918696             | torch.Size([2, 512, 32])         |
| 2144    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(5)       | input               | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 2144    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(5)       | output              | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 2145    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(5)         | input_0             | torch.float32 |         | -0.9179578        | 3.6144907        | 0.0096358      | 0.9918696             | torch.Size([2, 512, 32])         |
| 2145    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(5)         | input_1             | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 2145    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(5)         | output              | torch.float32 |         | -0.9160222        | 3.6425133        | 0.0193979      | 0.9654712             | torch.Size([2, 512, 32])         |
| 2146    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(5)                  | input               | torch.float32 |         | -0.9160222        | 3.6425133        | 0.0193979      | 0.9654712             | torch.Size([2, 512, 32])         |
| 2146    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(5)                  | weight              | torch.float32 |         | -0.5712157        | 0.5219681        | -0.0062917     | 0.0166056             | torch.Size([32, 32])             |
| 2146    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(5)                  | bias                | torch.float32 |         | -0.1649730        | 0.2318604        | 0.0253026      | 0.0136139             | torch.Size([32])                 |
| 2146    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(5)                  | output              | torch.float32 |         | -4.6068959        | 2.6295705        | -0.1432027     | 1.2447866             | torch.Size([2, 512, 32])         |
| 2147    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(5)                  | input               | torch.float32 |         | 0.0000000         | 2.6295705        | 0.3669087      | 0.2785589             | torch.Size([2, 512, 32])         |
| 2147    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(5)                  | output              | torch.float32 |         | 0.0000000         | 2.6295705        | 0.3669087      | 0.2785589             | torch.Size([2, 512, 32])         |
| 2148    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(5)  | input_0             | torch.float32 |         | 0.0000000         | 2.6295705        | 0.3669087      | 0.2785589             | torch.Size([2, 512, 32])         |
| 2148    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(5)  | output              | torch.float32 |         | 0.1916270         | 0.4927105        | 0.3669087      | 0.0088240             | torch.Size([2, 512, 1])          |
| 2149    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(5)              | input_0             | torch.float32 |         | 0.0000000         | 2.6295705        | 0.3669087      | 0.2785589             | torch.Size([2, 512, 32])         |
| 2149    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(5)              | input_1             | torch.float32 |         | 0.1916270         | 0.4927105        | 0.3669087      | 0.0088240             | torch.Size([2, 512, 1])          |
| 2149    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(5)              | output              | torch.float32 |         | -0.4927105        | 2.1869061        | -0.0000000     | 0.2697433             | torch.Size([2, 512, 32])         |
| 2150    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(5)              | input_0             | torch.float32 |         | -0.4927105        | 2.1869061        | -0.0000000     | 0.2697433             | torch.Size([2, 512, 32])         |
| 2150    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(5)              | input_1             | torch.float32 |         | -0.4927105        | 2.1869061        | -0.0000000     | 0.2697433             | torch.Size([2, 512, 32])         |
| 2150    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(5)              | output              | torch.float32 |         | 0.0000000         | 4.7825584        | 0.2697350      | 0.3309379             | torch.Size([2, 512, 32])         |
| 2151    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(5)    | input_0             | torch.float32 |         | 0.0000000         | 4.7825584        | 0.2697350      | 0.3309379             | torch.Size([2, 512, 32])         |
| 2151    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(5)    | output              | torch.float32 |         | 0.1382402         | 0.3825630        | 0.2697350      | 0.0060961             | torch.Size([2, 512, 1])          |
| 2152    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(5)            | input               | torch.float32 |         | 0.1382402         | 0.3825630        | 0.2697350      | 0.0060961             | torch.Size([2, 512, 1])          |
| 2152    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(5)            | output              | torch.float32 |         | 1.6167499         | 2.6894722        | 2.0035591      | 0.1258495             | torch.Size([2, 512, 1])          |
| 2153    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(5)          | input_0             | torch.float32 |         | -0.4927105        | 2.1869061        | -0.0000000     | 0.2697433             | torch.Size([2, 512, 32])         |
| 2153    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(5)          | input_1             | torch.float32 |         | 1.6167499         | 2.6894722        | 2.0035591      | 0.1258495             | torch.Size([2, 512, 1])          |
| 2153    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(5)          | output              | torch.float32 |         | -0.9415423        | 3.8093743        | -0.0000000     | 0.9999891             | torch.Size([2, 512, 32])         |
| 2154    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(5)     | input               | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 2154    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(5)     | output              | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 2155    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(5)       | input_0             | torch.float32 |         | -0.9415423        | 3.8093743        | -0.0000000     | 0.9999891             | torch.Size([2, 512, 32])         |
| 2155    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(5)       | input_1             | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 2155    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(5)       | output              | torch.float32 |         | -1.0654004        | 4.0232983        | 0.0041900      | 1.0275396             | torch.Size([2, 512, 32])         |
| 2156    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(5)       | input               | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 2156    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(5)       | output              | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 2157    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(5)         | input_0             | torch.float32 |         | -1.0654004        | 4.0232983        | 0.0041900      | 1.0275396             | torch.Size([2, 512, 32])         |
| 2157    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(5)         | input_1             | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 2157    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(5)         | output              | torch.float32 |         | -1.0343196        | 4.0482268        | 0.0083862      | 1.0072665             | torch.Size([2, 512, 32])         |
| 2158    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(5)                  | input               | torch.float32 |         | -1.0343196        | 4.0482268        | 0.0083862      | 1.0072665             | torch.Size([2, 512, 32])         |
| 2158    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(5)                  | weight              | torch.float32 |         | -0.3204980        | 0.3365203        | -0.0020388     | 0.0145364             | torch.Size([32, 32])             |
| 2158    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(5)                  | bias                | torch.float32 |         | -0.1559148        | 0.2119379        | 0.0091616      | 0.0105488             | torch.Size([32])                 |
| 2158    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(5)                  | output              | torch.float32 |         | -2.3890009        | 2.6779075        | 0.0316662      | 0.7723114             | torch.Size([2, 512, 32])         |
| 2159    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(5)                 | input               | torch.float32 |         | 0.0000000         | 2.6779075        | 0.3610236      | 0.2782862             | torch.Size([2, 512, 32])         |
| 2159    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(5)                 | output              | torch.float32 |         | 0.0000000         | 2.6779075        | 0.3610236      | 0.2782862             | torch.Size([2, 512, 32])         |
| 2160    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(5) | input_0             | torch.float32 |         | 0.0000000         | 2.6779075        | 0.3610236      | 0.2782862             | torch.Size([2, 512, 32])         |
| 2160    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(5) | output              | torch.float32 |         | 0.2617844         | 0.5690346        | 0.3610236      | 0.0025902             | torch.Size([2, 512, 1])          |
| 2161    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(5)             | input_0             | torch.float32 |         | 0.0000000         | 2.6779075        | 0.3610236      | 0.2782862             | torch.Size([2, 512, 32])         |
| 2161    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(5)             | input_1             | torch.float32 |         | 0.2617844         | 0.5690346        | 0.3610236      | 0.0025902             | torch.Size([2, 512, 1])          |
| 2161    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(5)             | output              | torch.float32 |         | -0.5690346        | 2.2661729        | 0.0000000      | 0.2756984             | torch.Size([2, 512, 32])         |
| 2162    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(5)             | input_0             | torch.float32 |         | -0.5690346        | 2.2661729        | 0.0000000      | 0.2756984             | torch.Size([2, 512, 32])         |
| 2162    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(5)             | input_1             | torch.float32 |         | -0.5690346        | 2.2661729        | 0.0000000      | 0.2756984             | torch.Size([2, 512, 32])         |
| 2162    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(5)             | output              | torch.float32 |         | 0.0000000         | 5.1355395        | 0.2756900      | 0.3685890             | torch.Size([2, 512, 32])         |
| 2163    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 5.1355395        | 0.2756900      | 0.3685890             | torch.Size([2, 512, 32])         |
| 2163    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(5)   | output              | torch.float32 |         | 0.1901981         | 0.4058985        | 0.2756900      | 0.0016864             | torch.Size([2, 512, 1])          |
| 2164    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(5)           | input               | torch.float32 |         | 0.1901981         | 0.4058985        | 0.2756900      | 0.0016864             | torch.Size([2, 512, 1])          |
| 2164    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(5)           | output              | torch.float32 |         | 1.5695889         | 2.2929018        | 1.9199083      | 0.0194978             | torch.Size([2, 512, 1])          |
| 2165    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(5)         | input_0             | torch.float32 |         | -0.5690346        | 2.2661729        | 0.0000000      | 0.2756984             | torch.Size([2, 512, 32])         |
| 2165    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(5)         | input_1             | torch.float32 |         | 1.5695889         | 2.2929018        | 1.9199083      | 0.0194978             | torch.Size([2, 512, 1])          |
| 2165    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(5)         | output              | torch.float32 |         | -1.0683569        | 3.8809440        | 0.0000000      | 0.9999934             | torch.Size([2, 512, 32])         |
| 2166    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(5)    | input               | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 2166    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(5)    | output              | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 2167    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(5)      | input_0             | torch.float32 |         | -1.0683569        | 3.8809440        | 0.0000000      | 0.9999934             | torch.Size([2, 512, 32])         |
| 2167    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(5)      | input_1             | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 2167    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(5)      | output              | torch.float32 |         | -1.7744402        | 5.0625663        | -0.0250352     | 1.4572159             | torch.Size([2, 512, 32])         |
| 2168    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(5)      | input               | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 2168    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(5)      | output              | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 2169    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(5)        | input_0             | torch.float32 |         | -1.7744402        | 5.0625663        | -0.0250352     | 1.4572159             | torch.Size([2, 512, 32])         |
| 2169    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(5)        | input_1             | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 2169    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(5)        | output              | torch.float32 |         | -1.7250781        | 5.1119285        | 0.0195334      | 1.3801482             | torch.Size([2, 512, 32])         |
| 2170    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.4885979       | 53.6353264       | 0.2314168      | 80.4070053            | torch.Size([2, 512, 11])         |
| 2170    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -1.3170593        | 1.3467246        | -0.0102497     | 0.1795993             | torch.Size([2, 512, 2])          |
| 2171    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(5)                   | input               | torch.float32 |         | -1.3170593        | 1.3467246        | -0.0102497     | 0.1795993             | torch.Size([2, 512, 2])          |
| 2171    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(5)                   | weight              | torch.float32 |         | -0.7023237        | 0.7394427        | 0.0490668      | 0.1972211             | torch.Size([32, 2])              |
| 2171    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(5)                   | bias                | torch.float32 |         | -0.7971504        | 0.6681666        | -0.1171320     | 0.1641774             | torch.Size([32])                 |
| 2171    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(5)                   | output              | torch.float32 |         | -1.8000648        | 1.3127508        | -0.1185666     | 0.2304298             | torch.Size([2, 512, 32])         |
| 2172    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(5)                   | input               | torch.float32 |         | 0.0000000         | 1.3127508        | 0.1454131      | 0.0629034             | torch.Size([2, 512, 32])         |
| 2172    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(5)                   | output              | torch.float32 |         | 0.0000000         | 1.3127508        | 0.1454131      | 0.0629034             | torch.Size([2, 512, 32])         |
| 2173    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 1.3127508        | 0.1454131      | 0.0629034             | torch.Size([2, 512, 32])         |
| 2173    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(5)   | output              | torch.float32 |         | 0.1083149         | 0.2765612        | 0.1454131      | 0.0009766             | torch.Size([2, 512, 1])          |
| 2174    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(5)               | input_0             | torch.float32 |         | 0.0000000         | 1.3127508        | 0.1454131      | 0.0629034             | torch.Size([2, 512, 32])         |
| 2174    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(5)               | input_1             | torch.float32 |         | 0.1083149         | 0.2765612        | 0.1454131      | 0.0009766             | torch.Size([2, 512, 1])          |
| 2174    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(5)               | output              | torch.float32 |         | -0.2765612        | 1.0440919        | 0.0000000      | 0.0619278             | torch.Size([2, 512, 32])         |
| 2175    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(5)               | input_0             | torch.float32 |         | -0.2765612        | 1.0440919        | 0.0000000      | 0.0619278             | torch.Size([2, 512, 32])         |
| 2175    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(5)               | input_1             | torch.float32 |         | -0.2765612        | 1.0440919        | 0.0000000      | 0.0619278             | torch.Size([2, 512, 32])         |
| 2175    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(5)               | output              | torch.float32 |         | 0.0000000         | 1.0901279        | 0.0619259      | 0.0142607             | torch.Size([2, 512, 32])         |
| 2176    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(5)     | input_0             | torch.float32 |         | 0.0000000         | 1.0901279        | 0.0619259      | 0.0142607             | torch.Size([2, 512, 32])         |
| 2176    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(5)     | output              | torch.float32 |         | 0.0406501         | 0.1538975        | 0.0619259      | 0.0004149             | torch.Size([2, 512, 1])          |
| 2177    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(5)             | input               | torch.float32 |         | 0.0406501         | 0.1538975        | 0.0619259      | 0.0004149             | torch.Size([2, 512, 1])          |
| 2177    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(5)             | output              | torch.float32 |         | 2.5490017         | 4.9592457        | 4.1512432      | 0.3136728             | torch.Size([2, 512, 1])          |
| 2178    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(5)           | input_0             | torch.float32 |         | -0.2765612        | 1.0440919        | 0.0000000      | 0.0619278             | torch.Size([2, 512, 32])         |
| 2178    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(5)           | input_1             | torch.float32 |         | 2.5490017         | 4.9592457        | 4.1512432      | 0.3136728             | torch.Size([2, 512, 1])          |
| 2178    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(5)           | output              | torch.float32 |         | -0.7764226        | 4.0054712        | 0.0000000      | 0.9998550             | torch.Size([2, 512, 32])         |
| 2179    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(5)      | input               | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 2179    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(5)      | output              | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 2180    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(5)        | input_0             | torch.float32 |         | -0.7764226        | 4.0054712        | 0.0000000      | 0.9998550             | torch.Size([2, 512, 32])         |
| 2180    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(5)        | input_1             | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 2180    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(5)        | output              | torch.float32 |         | -0.8832930        | 4.3372707        | 0.0030797      | 1.0026244             | torch.Size([2, 512, 32])         |
| 2181    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(5)        | input               | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 2181    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(5)        | output              | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 2182    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(5)          | input_0             | torch.float32 |         | -0.8832930        | 4.3372707        | 0.0030797      | 1.0026244             | torch.Size([2, 512, 32])         |
| 2182    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(5)          | input_1             | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 2182    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(5)          | output              | torch.float32 |         | -0.8280069        | 4.2568092        | 0.0315836      | 0.9253476             | torch.Size([2, 512, 32])         |
| 2183    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(5)                   | input               | torch.float32 |         | -0.8280069        | 4.2568092        | 0.0315836      | 0.9253476             | torch.Size([2, 512, 32])         |
| 2183    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(5)                   | weight              | torch.float32 |         | -1.0547366        | 0.5812716        | 0.0070099      | 0.0187704             | torch.Size([32, 32])             |
| 2183    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(5)                   | bias                | torch.float32 |         | -0.2183180        | 0.1396109        | -0.0140744     | 0.0103446             | torch.Size([32])                 |
| 2183    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(5)                   | output              | torch.float32 |         | -5.3679876        | 1.9340119        | -0.5033642     | 1.4348891             | torch.Size([2, 512, 32])         |
| 2184    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(5)                   | input               | torch.float32 |         | 0.0000000         | 1.9340119        | 0.2281185      | 0.1270565             | torch.Size([2, 512, 32])         |
| 2184    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(5)                   | output              | torch.float32 |         | 0.0000000         | 1.9340119        | 0.2281185      | 0.1270565             | torch.Size([2, 512, 32])         |
| 2185    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 1.9340119        | 0.2281185      | 0.1270565             | torch.Size([2, 512, 32])         |
| 2185    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(5)   | output              | torch.float32 |         | 0.1705870         | 0.3744900        | 0.2281185      | 0.0012793             | torch.Size([2, 512, 1])          |
| 2186    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(5)               | input_0             | torch.float32 |         | 0.0000000         | 1.9340119        | 0.2281185      | 0.1270565             | torch.Size([2, 512, 32])         |
| 2186    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(5)               | input_1             | torch.float32 |         | 0.1705870         | 0.3744900        | 0.2281185      | 0.0012793             | torch.Size([2, 512, 1])          |
| 2186    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(5)               | output              | torch.float32 |         | -0.3744900        | 1.6387405        | 0.0000000      | 0.1257785             | torch.Size([2, 512, 32])         |
| 2187    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(5)               | input_0             | torch.float32 |         | -0.3744900        | 1.6387405        | 0.0000000      | 0.1257785             | torch.Size([2, 512, 32])         |
| 2187    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(5)               | input_1             | torch.float32 |         | -0.3744900        | 1.6387405        | 0.0000000      | 0.1257785             | torch.Size([2, 512, 32])         |
| 2187    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(5)               | output              | torch.float32 |         | 0.0000000         | 2.6854706        | 0.1257747      | 0.0521318             | torch.Size([2, 512, 32])         |
| 2188    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(5)     | input_0             | torch.float32 |         | 0.0000000         | 2.6854706        | 0.1257747      | 0.0521318             | torch.Size([2, 512, 32])         |
| 2188    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(5)     | output              | torch.float32 |         | 0.0743127         | 0.2492594        | 0.1257747      | 0.0009446             | torch.Size([2, 512, 1])          |
| 2189    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(5)             | input               | torch.float32 |         | 0.0743127         | 0.2492594        | 0.1257747      | 0.0009446             | torch.Size([2, 512, 1])          |
| 2189    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(5)             | output              | torch.float32 |         | 2.0029290         | 3.6680844        | 2.8750257      | 0.0980077             | torch.Size([2, 512, 1])          |
| 2190    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(5)           | input_0             | torch.float32 |         | -0.3744900        | 1.6387405        | 0.0000000      | 0.1257785             | torch.Size([2, 512, 32])         |
| 2190    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(5)           | input_1             | torch.float32 |         | 2.0029290         | 3.6680844        | 2.8750257      | 0.0980077             | torch.Size([2, 512, 1])          |
| 2190    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(5)           | output              | torch.float32 |         | -0.8504621        | 3.5671463        | 0.0000000      | 0.9999468             | torch.Size([2, 512, 32])         |
| 2191    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(5)      | input               | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 2191    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(5)      | output              | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 2192    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(5)        | input_0             | torch.float32 |         | -0.8504621        | 3.5671463        | 0.0000000      | 0.9999468             | torch.Size([2, 512, 32])         |
| 2192    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(5)        | input_1             | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 2192    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(5)        | output              | torch.float32 |         | -0.9249663        | 3.6515234        | -0.0014287     | 0.9757619             | torch.Size([2, 512, 32])         |
| 2193    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(5)        | input               | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 2193    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(5)        | output              | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 2194    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(5)          | input_0             | torch.float32 |         | -0.9249663        | 3.6515234        | -0.0014287     | 0.9757619             | torch.Size([2, 512, 32])         |
| 2194    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(5)          | input_1             | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 2194    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(5)          | output              | torch.float32 |         | -0.8596438        | 3.6987154        | 0.0228155      | 0.9225950             | torch.Size([2, 512, 32])         |
| 2195    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(5)                   | input               | torch.float32 |         | -0.8596438        | 3.6987154        | 0.0228155      | 0.9225950             | torch.Size([2, 512, 32])         |
| 2195    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(5)                   | weight              | torch.float32 |         | -0.4480607        | 0.3678726        | 0.0004879      | 0.0160908             | torch.Size([32, 32])             |
| 2195    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(5)                   | bias                | torch.float32 |         | -0.1861591        | 0.1739754        | 0.0155446      | 0.0137690             | torch.Size([32])                 |
| 2195    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(5)                   | output              | torch.float32 |         | -3.6262476        | 2.4650657        | -0.2709875     | 1.4215606             | torch.Size([2, 512, 32])         |
| 2196    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(5)                   | input               | torch.float32 |         | 0.0000000         | 2.4650657        | 0.3371799      | 0.2040717             | torch.Size([2, 512, 32])         |
| 2196    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(5)                   | output              | torch.float32 |         | 0.0000000         | 2.4650657        | 0.3371799      | 0.2040717             | torch.Size([2, 512, 32])         |
| 2197    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 2.4650657        | 0.3371799      | 0.2040717             | torch.Size([2, 512, 32])         |
| 2197    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(5)   | output              | torch.float32 |         | 0.2389664         | 0.5579766        | 0.3371799      | 0.0011636             | torch.Size([2, 512, 1])          |
| 2198    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(5)               | input_0             | torch.float32 |         | 0.0000000         | 2.4650657        | 0.3371799      | 0.2040717             | torch.Size([2, 512, 32])         |
| 2198    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(5)               | input_1             | torch.float32 |         | 0.2389664         | 0.5579766        | 0.3371799      | 0.0011636             | torch.Size([2, 512, 1])          |
| 2198    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(5)               | output              | torch.float32 |         | -0.5579766        | 2.1724675        | -0.0000000     | 0.2029092             | torch.Size([2, 512, 32])         |
| 2199    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(5)               | input_0             | torch.float32 |         | -0.5579766        | 2.1724675        | -0.0000000     | 0.2029092             | torch.Size([2, 512, 32])         |
| 2199    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(5)               | input_1             | torch.float32 |         | -0.5579766        | 2.1724675        | -0.0000000     | 0.2029092             | torch.Size([2, 512, 32])         |
| 2199    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(5)               | output              | torch.float32 |         | 0.0000000         | 4.7196150        | 0.2029030      | 0.1211547             | torch.Size([2, 512, 32])         |
| 2200    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(5)     | input_0             | torch.float32 |         | 0.0000000         | 4.7196150        | 0.2029030      | 0.1211547             | torch.Size([2, 512, 32])         |
| 2200    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(5)     | output              | torch.float32 |         | 0.1574094         | 0.4735225        | 0.2029030      | 0.0009662             | torch.Size([2, 512, 1])          |
| 2201    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(5)             | input               | torch.float32 |         | 0.1574094         | 0.4735225        | 0.2029030      | 0.0009662             | torch.Size([2, 512, 1])          |
| 2201    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(5)             | output              | torch.float32 |         | 1.4531990         | 2.5204082        | 2.2350845      | 0.0191000             | torch.Size([2, 512, 1])          |
| 2202    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(5)           | input_0             | torch.float32 |         | -0.5579766        | 2.1724675        | -0.0000000     | 0.2029092             | torch.Size([2, 512, 32])         |
| 2202    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(5)           | input_1             | torch.float32 |         | 1.4531990         | 2.5204082        | 2.2350845      | 0.0191000             | torch.Size([2, 512, 1])          |
| 2202    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(5)           | output              | torch.float32 |         | -0.8627743        | 4.4253621        | -0.0000000     | 0.9999803             | torch.Size([2, 512, 32])         |
| 2203    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(5)      | input               | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 2203    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(5)      | output              | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 2204    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(5)        | input_0             | torch.float32 |         | -0.8627743        | 4.4253621        | -0.0000000     | 0.9999803             | torch.Size([2, 512, 32])         |
| 2204    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(5)        | input_1             | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 2204    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(5)        | output              | torch.float32 |         | -0.9568561        | 4.7181005        | -0.0013931     | 0.9966017             | torch.Size([2, 512, 32])         |
| 2205    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(5)        | input               | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 2205    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(5)        | output              | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 2206    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(5)          | input_0             | torch.float32 |         | -0.9568561        | 4.7181005        | -0.0013931     | 0.9966017             | torch.Size([2, 512, 32])         |
| 2206    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(5)          | input_1             | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 2206    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(5)          | output              | torch.float32 |         | -0.9553727        | 4.7397065        | 0.0057766      | 0.9743876             | torch.Size([2, 512, 32])         |
| 2207    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(5)                   | input               | torch.float32 |         | -0.9553727        | 4.7397065        | 0.0057766      | 0.9743876             | torch.Size([2, 512, 32])         |
| 2207    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(5)                   | weight              | torch.float32 |         | -0.5597425        | 0.7001730        | 0.0015679      | 0.0160348             | torch.Size([32, 32])             |
| 2207    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(5)                   | bias                | torch.float32 |         | -0.1810580        | 0.1736723        | -0.0279047     | 0.0091159             | torch.Size([32])                 |
| 2207    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(5)                   | output              | torch.float32 |         | -4.3340092        | 3.4949043        | -0.2385980     | 1.1970040             | torch.Size([2, 512, 32])         |
| 2208    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(5)                  | input               | torch.float32 |         | 0.0000000         | 3.4949043        | 0.2896575      | 0.3068922             | torch.Size([2, 512, 32])         |
| 2208    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(5)                  | output              | torch.float32 |         | 0.0000000         | 3.4949043        | 0.2896575      | 0.3068922             | torch.Size([2, 512, 32])         |
| 2209    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(5)  | input_0             | torch.float32 |         | 0.0000000         | 3.4949043        | 0.2896575      | 0.3068922             | torch.Size([2, 512, 32])         |
| 2209    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(5)  | output              | torch.float32 |         | 0.2189074         | 0.3988135        | 0.2896575      | 0.0017103             | torch.Size([2, 512, 1])          |
| 2210    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(5)              | input_0             | torch.float32 |         | 0.0000000         | 3.4949043        | 0.2896575      | 0.3068922             | torch.Size([2, 512, 32])         |
| 2210    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(5)              | input_1             | torch.float32 |         | 0.2189074         | 0.3988135        | 0.2896575      | 0.0017103             | torch.Size([2, 512, 1])          |
| 2210    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(5)              | output              | torch.float32 |         | -0.3988135        | 3.1989157        | 0.0000000      | 0.3051835             | torch.Size([2, 512, 32])         |
| 2211    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(5)              | input_0             | torch.float32 |         | -0.3988135        | 3.1989157        | 0.0000000      | 0.3051835             | torch.Size([2, 512, 32])         |
| 2211    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(5)              | input_1             | torch.float32 |         | -0.3988135        | 3.1989157        | 0.0000000      | 0.3051835             | torch.Size([2, 512, 32])         |
| 2211    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(5)              | output              | torch.float32 |         | 0.0000000         | 10.2330618       | 0.3051741      | 0.8912749             | torch.Size([2, 512, 32])         |
| 2212    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(5)    | input_0             | torch.float32 |         | 0.0000000         | 10.2330618       | 0.3051741      | 0.8912749             | torch.Size([2, 512, 32])         |
| 2212    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(5)    | output              | torch.float32 |         | 0.1371754         | 0.4371316        | 0.3051742      | 0.0053275             | torch.Size([2, 512, 1])          |
| 2213    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(5)            | input               | torch.float32 |         | 0.1371754         | 0.4371316        | 0.3051742      | 0.0053275             | torch.Size([2, 512, 1])          |
| 2213    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(5)            | output              | torch.float32 |         | 1.5124775         | 2.6998899        | 1.8564613      | 0.0662578             | torch.Size([2, 512, 1])          |
| 2214    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(5)          | input_0             | torch.float32 |         | -0.3988135        | 3.1989157        | 0.0000000      | 0.3051835             | torch.Size([2, 512, 32])         |
| 2214    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(5)          | input_1             | torch.float32 |         | 1.5124775         | 2.6998899        | 1.8564613      | 0.0662578             | torch.Size([2, 512, 1])          |
| 2214    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(5)          | output              | torch.float32 |         | -0.7851027        | 4.8382883        | 0.0000000      | 0.9999954             | torch.Size([2, 512, 32])         |
| 2215    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(5)     | input               | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 2215    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(5)     | output              | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 2216    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(5)       | input_0             | torch.float32 |         | -0.7851027        | 4.8382883        | 0.0000000      | 0.9999954             | torch.Size([2, 512, 32])         |
| 2216    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(5)       | input_1             | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 2216    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(5)       | output              | torch.float32 |         | -1.1531858        | 5.2016792        | -0.0571555     | 0.9137468             | torch.Size([2, 512, 32])         |
| 2217    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(5)       | input               | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 2217    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(5)       | output              | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 2218    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(5)         | input_0             | torch.float32 |         | -1.1531858        | 5.2016792        | -0.0571555     | 0.9137468             | torch.Size([2, 512, 32])         |
| 2218    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(5)         | input_1             | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 2218    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(5)         | output              | torch.float32 |         | -0.9682196        | 5.3816223        | 0.0232236      | 0.8359140             | torch.Size([2, 512, 32])         |
| 2219    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.4885979       | 53.6353264       | 0.2314168      | 80.4070053            | torch.Size([2, 512, 11])         |
| 2219    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -12.3630152       | 9.3720875        | -0.2033213     | 2.4317267             | torch.Size([2, 512, 3])          |
| 2220    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(5)                   | input               | torch.float32 |         | -12.3630152       | 9.3720875        | -0.2033213     | 2.4317267             | torch.Size([2, 512, 3])          |
| 2220    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(5)                   | weight              | torch.float32 |         | -1.0475703        | 0.9848034        | -0.0054673     | 0.2080412             | torch.Size([64, 3])              |
| 2220    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(5)                   | bias                | torch.float32 |         | -0.8030427        | 0.5068271        | -0.0504076     | 0.1294928             | torch.Size([64])                 |
| 2220    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(5)                   | output              | torch.float32 |         | -11.3839579       | 13.2495089       | -0.0984238     | 1.8156146             | torch.Size([2, 512, 64])         |
| 2221    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(5)                   | input               | torch.float32 |         | 0.0000000         | 13.2495089       | 0.2983021      | 0.7210999             | torch.Size([2, 512, 64])         |
| 2221    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(5)                   | output              | torch.float32 |         | 0.0000000         | 13.2495089       | 0.2983021      | 0.7210999             | torch.Size([2, 512, 64])         |
| 2222    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 13.2495089       | 0.2983021      | 0.7210999             | torch.Size([2, 512, 64])         |
| 2222    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(5)   | output              | torch.float32 |         | 0.1200990         | 2.3533511        | 0.2983022      | 0.1583448             | torch.Size([2, 512, 1])          |
| 2223    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(5)               | input_0             | torch.float32 |         | 0.0000000         | 13.2495089       | 0.2983021      | 0.7210999             | torch.Size([2, 512, 64])         |
| 2223    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(5)               | input_1             | torch.float32 |         | 0.1200990         | 2.3533511        | 0.2983022      | 0.1583448             | torch.Size([2, 512, 1])          |
| 2223    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(5)               | output              | torch.float32 |         | -2.3533511        | 10.9133921       | -0.0000000     | 0.5629072             | torch.Size([2, 512, 64])         |
| 2224    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(5)               | input_0             | torch.float32 |         | -2.3533511        | 10.9133921       | -0.0000000     | 0.5629072             | torch.Size([2, 512, 64])         |
| 2224    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(5)               | input_1             | torch.float32 |         | -2.3533511        | 10.9133921       | -0.0000000     | 0.5629072             | torch.Size([2, 512, 64])         |
| 2224    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(5)               | output              | torch.float32 |         | 0.0000000         | 119.1021271      | 0.5628986      | 14.6475687            | torch.Size([2, 512, 64])         |
| 2225    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(5)     | input_0             | torch.float32 |         | 0.0000000         | 119.1021271      | 0.5628986      | 14.6475687            | torch.Size([2, 512, 64])         |
| 2225    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(5)     | output              | torch.float32 |         | 0.0269448         | 13.7333717       | 0.5628986      | 3.7440956             | torch.Size([2, 512, 1])          |
| 2226    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(5)             | input               | torch.float32 |         | 0.0269448         | 13.7333717       | 0.5628986      | 3.7440956             | torch.Size([2, 512, 1])          |
| 2226    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(5)             | output              | torch.float32 |         | 0.2698431         | 6.0909114        | 4.1321011      | 2.9333873             | torch.Size([2, 512, 1])          |
| 2227    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(5)           | input_0             | torch.float32 |         | -2.3533511        | 10.9133921       | -0.0000000     | 0.5629072             | torch.Size([2, 512, 64])         |
| 2227    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(5)           | input_1             | torch.float32 |         | 0.2698431         | 6.0909114        | 4.1321011      | 2.9333873             | torch.Size([2, 512, 1])          |
| 2227    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(5)           | output              | torch.float32 |         | -0.9073625        | 4.0122776        | -0.0000000     | 0.9998152             | torch.Size([2, 512, 64])         |
| 2228    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(5)      | input               | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 2228    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(5)      | output              | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 2229    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(5)        | input_0             | torch.float32 |         | -0.9073625        | 4.0122776        | -0.0000000     | 0.9998152             | torch.Size([2, 512, 64])         |
| 2229    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(5)        | input_1             | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 2229    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(5)        | output              | torch.float32 |         | -1.0236218        | 3.9895618        | 0.0115440      | 0.9588332             | torch.Size([2, 512, 64])         |
| 2230    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(5)        | input               | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 2230    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(5)        | output              | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 2231    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(5)          | input_0             | torch.float32 |         | -1.0236218        | 3.9895618        | 0.0115440      | 0.9588332             | torch.Size([2, 512, 64])         |
| 2231    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(5)          | input_1             | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 2231    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(5)          | output              | torch.float32 |         | -1.0170411        | 3.9615018        | 0.0419980      | 0.8798733             | torch.Size([2, 512, 64])         |
| 2232    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(5)                   | input               | torch.float32 |         | -1.0170411        | 3.9615018        | 0.0419980      | 0.8798733             | torch.Size([2, 512, 64])         |
| 2232    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(5)                   | weight              | torch.float32 |         | -0.4523612        | 0.4813256        | -0.0014562     | 0.0096743             | torch.Size([64, 64])             |
| 2232    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(5)                   | bias                | torch.float32 |         | -0.1183558        | 0.2243176        | 0.0150283      | 0.0049289             | torch.Size([64])                 |
| 2232    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(5)                   | output              | torch.float32 |         | -5.3243456        | 4.1271443        | -0.3656511     | 2.1365447             | torch.Size([2, 512, 64])         |
| 2233    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(5)                   | input               | torch.float32 |         | 0.0000000         | 4.1271443        | 0.3617655      | 0.2884975             | torch.Size([2, 512, 64])         |
| 2233    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(5)                   | output              | torch.float32 |         | 0.0000000         | 4.1271443        | 0.3617655      | 0.2884975             | torch.Size([2, 512, 64])         |
| 2234    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 4.1271443        | 0.3617655      | 0.2884975             | torch.Size([2, 512, 64])         |
| 2234    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(5)   | output              | torch.float32 |         | 0.2156723         | 0.6726823        | 0.3617655      | 0.0137075             | torch.Size([2, 512, 1])          |
| 2235    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(5)               | input_0             | torch.float32 |         | 0.0000000         | 4.1271443        | 0.3617655      | 0.2884975             | torch.Size([2, 512, 64])         |
| 2235    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(5)               | input_1             | torch.float32 |         | 0.2156723         | 0.6726823        | 0.3617655      | 0.0137075             | torch.Size([2, 512, 1])          |
| 2235    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(5)               | output              | torch.float32 |         | -0.6726823        | 3.5455959        | -0.0000000     | 0.2748032             | torch.Size([2, 512, 64])         |
| 2236    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(5)               | input_0             | torch.float32 |         | -0.6726823        | 3.5455959        | -0.0000000     | 0.2748032             | torch.Size([2, 512, 64])         |
| 2236    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(5)               | input_1             | torch.float32 |         | -0.6726823        | 3.5455959        | -0.0000000     | 0.2748032             | torch.Size([2, 512, 64])         |
| 2236    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(5)               | output              | torch.float32 |         | 0.0000000         | 12.5712500       | 0.2747989      | 0.3995290             | torch.Size([2, 512, 64])         |
| 2237    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(5)     | input_0             | torch.float32 |         | 0.0000000         | 12.5712500       | 0.2747989      | 0.3995290             | torch.Size([2, 512, 64])         |
| 2237    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(5)     | output              | torch.float32 |         | 0.0841746         | 0.9820077        | 0.2747989      | 0.0311831             | torch.Size([2, 512, 1])          |
| 2238    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(5)             | input               | torch.float32 |         | 0.0841746         | 0.9820077        | 0.2747989      | 0.0311831             | torch.Size([2, 512, 1])          |
| 2238    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(5)             | output              | torch.float32 |         | 1.0091143         | 3.4465425        | 2.2347701      | 0.5659812             | torch.Size([2, 512, 1])          |
| 2239    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(5)           | input_0             | torch.float32 |         | -0.6726823        | 3.5455959        | -0.0000000     | 0.2748032             | torch.Size([2, 512, 64])         |
| 2239    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(5)           | input_1             | torch.float32 |         | 1.0091143         | 3.4465425        | 2.2347701      | 0.5659812             | torch.Size([2, 512, 1])          |
| 2239    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(5)           | output              | torch.float32 |         | -0.8848578        | 4.2550769        | 0.0000000      | 0.9999597             | torch.Size([2, 512, 64])         |
| 2240    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(5)      | input               | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 2240    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(5)      | output              | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 2241    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(5)        | input_0             | torch.float32 |         | -0.8848578        | 4.2550769        | 0.0000000      | 0.9999597             | torch.Size([2, 512, 64])         |
| 2241    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(5)        | input_1             | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 2241    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(5)        | output              | torch.float32 |         | -0.9465551        | 4.1793761        | 0.0050036      | 0.9874967             | torch.Size([2, 512, 64])         |
| 2242    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(5)        | input               | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 2242    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(5)        | output              | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 2243    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(5)          | input_0             | torch.float32 |         | -0.9465551        | 4.1793761        | 0.0050036      | 0.9874967             | torch.Size([2, 512, 64])         |
| 2243    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(5)          | input_1             | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 2243    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(5)          | output              | torch.float32 |         | -0.9398243        | 4.1660519        | 0.0214979      | 0.9495925             | torch.Size([2, 512, 64])         |
| 2244    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(5)                   | input               | torch.float32 |         | -0.9398243        | 4.1660519        | 0.0214979      | 0.9495925             | torch.Size([2, 512, 64])         |
| 2244    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(5)                   | weight              | torch.float32 |         | -0.5707353        | 0.3620123        | -0.0010372     | 0.0088292             | torch.Size([64, 64])             |
| 2244    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(5)                   | bias                | torch.float32 |         | -0.1720246        | 0.1340137        | -0.0235144     | 0.0050507             | torch.Size([64])                 |
| 2244    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(5)                   | output              | torch.float32 |         | -5.3284893        | 3.7095654        | -0.2886074     | 1.9943168             | torch.Size([2, 512, 64])         |
| 2245    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(5)                   | input               | torch.float32 |         | 0.0000000         | 3.7095654        | 0.4463516      | 0.4885361             | torch.Size([2, 512, 64])         |
| 2245    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(5)                   | output              | torch.float32 |         | 0.0000000         | 3.7095654        | 0.4463516      | 0.4885361             | torch.Size([2, 512, 64])         |
| 2246    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(5)   | input_0             | torch.float32 |         | 0.0000000         | 3.7095654        | 0.4463516      | 0.4885361             | torch.Size([2, 512, 64])         |
| 2246    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(5)   | output              | torch.float32 |         | 0.3263818         | 0.5312533        | 0.4463516      | 0.0020696             | torch.Size([2, 512, 1])          |
| 2247    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(5)               | input_0             | torch.float32 |         | 0.0000000         | 3.7095654        | 0.4463516      | 0.4885361             | torch.Size([2, 512, 64])         |
| 2247    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(5)               | input_1             | torch.float32 |         | 0.3263818         | 0.5312533        | 0.4463516      | 0.0020696             | torch.Size([2, 512, 1])          |
| 2247    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(5)               | output              | torch.float32 |         | -0.5312533        | 3.2200687        | 0.0000000      | 0.4864686             | torch.Size([2, 512, 64])         |
| 2248    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(5)               | input_0             | torch.float32 |         | -0.5312533        | 3.2200687        | 0.0000000      | 0.4864686             | torch.Size([2, 512, 64])         |
| 2248    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(5)               | input_1             | torch.float32 |         | -0.5312533        | 3.2200687        | 0.0000000      | 0.4864686             | torch.Size([2, 512, 64])         |
| 2248    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(5)               | output              | torch.float32 |         | 0.0000000         | 10.3688421       | 0.4864611      | 0.9779148             | torch.Size([2, 512, 64])         |
| 2249    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(5)     | input_0             | torch.float32 |         | 0.0000000         | 10.3688421       | 0.4864611      | 0.9779148             | torch.Size([2, 512, 64])         |
| 2249    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(5)     | output              | torch.float32 |         | 0.2724177         | 0.7248777        | 0.4864611      | 0.0094920             | torch.Size([2, 512, 1])          |
| 2250    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(5)             | input               | torch.float32 |         | 0.2724177         | 0.7248777        | 0.4864611      | 0.0094920             | torch.Size([2, 512, 1])          |
| 2250    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(5)             | output              | torch.float32 |         | 1.1745313         | 1.9159065        | 1.4562463      | 0.0226205             | torch.Size([2, 512, 1])          |
| 2251    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(5)           | input_0             | torch.float32 |         | -0.5312533        | 3.2200687        | 0.0000000      | 0.4864686             | torch.Size([2, 512, 64])         |
| 2251    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(5)           | input_1             | torch.float32 |         | 1.1745313         | 1.9159065        | 1.4562463      | 0.0226205             | torch.Size([2, 512, 1])          |
| 2251    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(5)           | output              | torch.float32 |         | -0.7291731        | 4.1572018        | -0.0000000     | 0.9999939             | torch.Size([2, 512, 64])         |
| 2252    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(5)      | input               | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 2252    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(5)      | output              | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 2253    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(5)        | input_0             | torch.float32 |         | -0.7291731        | 4.1572018        | -0.0000000     | 0.9999939             | torch.Size([2, 512, 64])         |
| 2253    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(5)        | input_1             | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 2253    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(5)        | output              | torch.float32 |         | -0.8030215        | 4.4005365        | 0.0058439      | 0.9990116             | torch.Size([2, 512, 64])         |
| 2254    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(5)        | input               | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 2254    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(5)        | output              | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 2255    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(5)          | input_0             | torch.float32 |         | -0.8030215        | 4.4005365        | 0.0058439      | 0.9990116             | torch.Size([2, 512, 64])         |
| 2255    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(5)          | input_1             | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 2255    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(5)          | output              | torch.float32 |         | -0.8000840        | 4.4192834        | 0.0191267      | 0.9841654             | torch.Size([2, 512, 64])         |
| 2256    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(5)                   | input               | torch.float32 |         | -0.8000840        | 4.4192834        | 0.0191267      | 0.9841654             | torch.Size([2, 512, 64])         |
| 2256    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(5)                   | weight              | torch.float32 |         | -0.5701389        | 0.3477888        | 0.0006721      | 0.0085883             | torch.Size([64, 64])             |
| 2256    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(5)                   | bias                | torch.float32 |         | -0.1677032        | 0.1709885        | -0.0237130     | 0.0070098             | torch.Size([64])                 |
| 2256    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(5)                   | output              | torch.float32 |         | -4.7874441        | 7.2180457        | -0.4089854     | 1.6490266             | torch.Size([2, 512, 64])         |
| 2257    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(5)                  | input               | torch.float32 |         | 0.0000000         | 7.2180457        | 0.2808695      | 0.5988934             | torch.Size([2, 512, 64])         |
| 2257    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(5)                  | output              | torch.float32 |         | 0.0000000         | 7.2180457        | 0.2808695      | 0.5988934             | torch.Size([2, 512, 64])         |
| 2258    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(5)  | input_0             | torch.float32 |         | 0.0000000         | 7.2180457        | 0.2808695      | 0.5988934             | torch.Size([2, 512, 64])         |
| 2258    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(5)  | output              | torch.float32 |         | 0.2038582         | 0.4014210        | 0.2808695      | 0.0030908             | torch.Size([2, 512, 1])          |
| 2259    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(5)              | input_0             | torch.float32 |         | 0.0000000         | 7.2180457        | 0.2808695      | 0.5988934             | torch.Size([2, 512, 64])         |
| 2259    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(5)              | input_1             | torch.float32 |         | 0.2038582         | 0.4014210        | 0.2808695      | 0.0030908             | torch.Size([2, 512, 1])          |
| 2259    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(5)              | output              | torch.float32 |         | -0.4014210        | 7.0134902        | 0.0000000      | 0.5958056             | torch.Size([2, 512, 64])         |
| 2260    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(5)              | input_0             | torch.float32 |         | -0.4014210        | 7.0134902        | 0.0000000      | 0.5958056             | torch.Size([2, 512, 64])         |
| 2260    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(5)              | input_1             | torch.float32 |         | -0.4014210        | 7.0134902        | 0.0000000      | 0.5958056             | torch.Size([2, 512, 64])         |
| 2260    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(5)              | output              | torch.float32 |         | 0.0000000         | 49.1890450       | 0.5957965      | 14.9200459            | torch.Size([2, 512, 64])         |
| 2261    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(5)    | input_0             | torch.float32 |         | 0.0000000         | 49.1890450       | 0.5957965      | 14.9200459            | torch.Size([2, 512, 64])         |
| 2261    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(5)    | output              | torch.float32 |         | 0.1985897         | 0.8293260        | 0.5957965      | 0.0340033             | torch.Size([2, 512, 1])          |
| 2262    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(5)            | input               | torch.float32 |         | 0.1985897         | 0.8293260        | 0.5957965      | 0.0340033             | torch.Size([2, 512, 1])          |
| 2262    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(5)            | output              | torch.float32 |         | 1.0980819         | 2.2439370        | 1.3656806      | 0.0908390             | torch.Size([2, 512, 1])          |
| 2263    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(5)          | input_0             | torch.float32 |         | -0.4014210        | 7.0134902        | 0.0000000      | 0.5958056             | torch.Size([2, 512, 64])         |
| 2263    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(5)          | input_1             | torch.float32 |         | 1.0980819         | 2.2439370        | 1.3656806      | 0.0908390             | torch.Size([2, 512, 1])          |
| 2263    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(5)          | output              | torch.float32 |         | -0.7157668        | 7.7588320        | -0.0000000     | 0.9999956             | torch.Size([2, 512, 64])         |
| 2264    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(5)     | input               | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 2264    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(5)     | output              | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 2265    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(5)       | input_0             | torch.float32 |         | -0.7157668        | 7.7588320        | -0.0000000     | 0.9999956             | torch.Size([2, 512, 64])         |
| 2265    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(5)       | input_1             | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 2265    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(5)       | output              | torch.float32 |         | -0.8930526        | 5.6617460        | -0.0223466     | 0.7933782             | torch.Size([2, 512, 64])         |
| 2266    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(5)       | input               | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 2266    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(5)       | output              | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 2267    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(5)         | input_0             | torch.float32 |         | -0.8930526        | 5.6617460        | -0.0223466     | 0.7933782             | torch.Size([2, 512, 64])         |
| 2267    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(5)         | input_1             | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 2267    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(5)         | output              | torch.float32 |         | -0.8653387        | 5.6975260        | 0.0676588      | 0.7318357             | torch.Size([2, 512, 64])         |
| 2268    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | input_0             | torch.float32 |         | -0.8304361        | 7.3573351        | 0.0719146      | 0.8676326             | torch.Size([2, 512, 128])        |
| 2268    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | input_1             | torch.float32 |         | -1.7250781        | 5.1119285        | 0.0195334      | 1.3801482             | torch.Size([2, 512, 32])         |
| 2268    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | input_2             | torch.float32 |         | -0.9682196        | 5.3816223        | 0.0232236      | 0.8359140             | torch.Size([2, 512, 32])         |
| 2268    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | input_3             | torch.float32 |         | -0.8653387        | 5.6975260        | 0.0676588      | 0.7318357             | torch.Size([2, 512, 64])         |
| 2268    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(5)                        | output              | torch.float32 |         | -1.7250781        | 7.3573351        | 0.0582166      | 0.8942280             | torch.Size([2, 512, 256])        |
| 2269    | torch.nn.modules.linear.Linear                                                    | head.fc_before(8)                                 | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 2269    | torch.nn.modules.linear.Linear                                                    | head.fc_before(8)                                 | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 2269    | torch.nn.modules.linear.Linear                                                    | head.fc_before(8)                                 | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 2270    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.28.query_cat                          | input_0             | torch.float32 |         | -4.8112426        | 3.4337044        | 0.0028188      | 0.8370261             | torch.Size([2, 512, 256])        |
| 2270    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.28.query_cat                          | input_1             | torch.float32 |         | -1.7250781        | 7.3573351        | 0.0582166      | 0.8942280             | torch.Size([2, 512, 256])        |
| 2270    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.28.query_cat                          | output              | torch.float32 |         | -4.8112426        | 7.3573351        | 0.0305177      | 0.8663927             | torch.Size([2, 512, 512])        |
| 2271    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.28.key_cat                            | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 2271    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.28.key_cat                            | input_1             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0508909      | 0.8514420             | torch.Size([2, 256, 256])        |
| 2271    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.28.key_cat                            | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 2272    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | input_0             | torch.float32 |         | -4.8112426        | 7.3573351        | 0.0305177      | 0.8663927             | torch.Size([2, 512, 512])        |
| 2272    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | output              | torch.float32 |         | -4.8112426        | 7.3573351        | 0.0305177      | 0.8663927             | torch.Size([512, 2, 512])        |
| 2273    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 2273    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 2274    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 2274    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 2275    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | input_0             | torch.float32 |         | -4.8112426        | 7.3573351        | 0.0305177      | 0.8663927             | torch.Size([512, 2, 512])        |
| 2275    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | output              | torch.float32 |         | -4.8112426        | 7.3573351        | 0.0305177      | 0.8663927             | torch.Size([512, 2, 512])        |
| 2276    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 2276    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 2277    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 2277    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 2278    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.q_proj                        | input               | torch.float32 |         | -4.8112426        | 7.3573351        | 0.0305177      | 0.8663927             | torch.Size([512, 2, 512])        |
| 2278    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.q_proj                        | weight              | torch.float32 |         | -0.4073947        | 0.3189994        | 0.0001346      | 0.0033978             | torch.Size([512, 512])           |
| 2278    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.q_proj                        | bias                | torch.float32 |         | -0.0915100        | 0.0791734        | -0.0000095     | 0.0008503             | torch.Size([512])                |
| 2278    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.q_proj                        | output              | torch.float32 |         | -15.1005812       | 13.0722227       | 0.0543694      | 9.6901884             | torch.Size([512, 2, 512])        |
| 2279    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.k_proj                        | input               | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 2279    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.k_proj                        | weight              | torch.float32 |         | -0.4692126        | 0.5299173        | -0.0000477     | 0.0036618             | torch.Size([512, 512])           |
| 2279    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.k_proj                        | bias                | torch.float32 |         | -0.0043523        | 0.0039338        | -0.0000140     | 0.0000007             | torch.Size([512])                |
| 2279    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.k_proj                        | output              | torch.float32 |         | -6.2245770        | 4.8604660        | -0.0389154     | 3.9658709             | torch.Size([256, 2, 512])        |
| 2280    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.v_proj                        | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 2280    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.v_proj                        | weight              | torch.float32 |         | -0.3048484        | 0.3328977        | -0.0000697     | 0.0014966             | torch.Size([512, 512])           |
| 2280    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.v_proj                        | bias                | torch.float32 |         | -0.0813287        | 0.0743355        | -0.0004657     | 0.0005773             | torch.Size([512])                |
| 2280    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.v_proj                        | output              | torch.float32 |         | -0.0813287        | 0.0743355        | -0.0004657     | 0.0005761             | torch.Size([256, 2, 512])        |
| 2281    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | input_0             | torch.float32 |         | -15.1005812       | 13.0722227       | 0.0543694      | 9.6901884             | torch.Size([512, 2, 512])        |
| 2281    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | output              | torch.float32 |         | -15.1005812       | 13.0722227       | 0.0543694      | 9.6901884             | torch.Size([512, 16, 64])        |
| 2282    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | input_0             | torch.float32 |         | -15.1005812       | 13.0722227       | 0.0543694      | 9.6901884             | torch.Size([512, 16, 64])        |
| 2282    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | output              | torch.float32 |         | -15.1005812       | 13.0722227       | 0.0543694      | 9.6901884             | torch.Size([16, 512, 64])        |
| 2283    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | input_0             | torch.float32 |         | -6.2245770        | 4.8604660        | -0.0389154     | 3.9658709             | torch.Size([256, 2, 512])        |
| 2283    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | output              | torch.float32 |         | -6.2245770        | 4.8604660        | -0.0389154     | 3.9658709             | torch.Size([256, 16, 64])        |
| 2284    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | input_0             | torch.float32 |         | -6.2245770        | 4.8604660        | -0.0389154     | 3.9658709             | torch.Size([256, 16, 64])        |
| 2284    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | output              | torch.float32 |         | -6.2245770        | 4.8604660        | -0.0389154     | 3.9658709             | torch.Size([16, 256, 64])        |
| 2285    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | input_0             | torch.float32 |         | -0.0813287        | 0.0743355        | -0.0004657     | 0.0005761             | torch.Size([256, 2, 512])        |
| 2285    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | output              | torch.float32 |         | -0.0813287        | 0.0743355        | -0.0004657     | 0.0005761             | torch.Size([256, 16, 64])        |
| 2286    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | input_0             | torch.float32 |         | -0.0813287        | 0.0743355        | -0.0004657     | 0.0005761             | torch.Size([256, 16, 64])        |
| 2286    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | output              | torch.float32 |         | -0.0813287        | 0.0743355        | -0.0004657     | 0.0005761             | torch.Size([16, 256, 64])        |
| 2287    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.28.attn.q_scale_mul                   | input_0             | torch.float32 |         | -15.1005812       | 13.0722227       | 0.0543694      | 9.6901884             | torch.Size([16, 512, 64])        |
| 2287    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.28.attn.q_scale_mul                   | output              | torch.float32 |         | -1.8875726        | 1.6340278        | 0.0067962      | 0.1514092             | torch.Size([16, 512, 64])        |
| 2288    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | input_0             | torch.float32 |         | -6.2245770        | 4.8604660        | -0.0389154     | 3.9658709             | torch.Size([16, 256, 64])        |
| 2288    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | output              | torch.float32 |         | -6.2245770        | 4.8604660        | -0.0389154     | 3.9658709             | torch.Size([16, 64, 256])        |
| 2289    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.28.attn.matmul                        | input_0             | torch.float32 |         | -1.8875726        | 1.6340278        | 0.0067962      | 0.1514092             | torch.Size([16, 512, 64])        |
| 2289    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.28.attn.matmul                        | input_1             | torch.float32 |         | -6.2245770        | 4.8604660        | -0.0389154     | 3.9658709             | torch.Size([16, 64, 256])        |
| 2289    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.28.attn.matmul                        | output              | torch.float32 |         | -79.4097672       | 96.6544342       | 0.9085908      | 656.6485596           | torch.Size([16, 512, 256])       |
| 2290    | torch.Tensor.max                                                                  | head.layers.28.attn.softmax                       | input               | torch.float32 |         | -79.4097672       | 96.6544342       | 0.9085908      | 656.6485596           | torch.Size([16, 512, 256])       |
| 2290    | torch.Tensor.max                                                                  | head.layers.28.attn.softmax                       | output_0            | torch.float32 |         | -79.4097672       | 96.6544342       | 0.9085908      | 656.7283325           | torch.Size([16, 512, 1])         |
| 2290    | torch.Tensor.max                                                                  | head.layers.28.attn.softmax                       | output_1            | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 1])         |
| 2291    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.28.attn.softmax.sub                   | input_0             | torch.float32 |         | -79.4097672       | 96.6544342       | 0.9085908      | 656.6485596           | torch.Size([16, 512, 256])       |
| 2291    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.28.attn.softmax.sub                   | input_1             | torch.float32 |         | -79.4097672       | 96.6544342       | 0.9085908      | 656.7283325           | torch.Size([16, 512, 1])         |
| 2291    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.28.attn.softmax.sub                   | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2292    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.28.attn.softmax.exp                   | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2292    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.28.attn.softmax.exp                   | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2293    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.28.attn.softmax.sum                   | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2293    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.28.attn.softmax.sum                   | output              | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 2294    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.28.attn.softmax.reciprocal            | input               | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 2294    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.28.attn.softmax.reciprocal            | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 2295    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.28.attn.softmax.mul                   | input_0             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2295    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.28.attn.softmax.mul                   | input_1             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 2295    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.28.attn.softmax.mul                   | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2296    | torch.nn.modules.dropout.Dropout                                                  | head.layers.28.attn.attention_drop                | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2296    | torch.nn.modules.dropout.Dropout                                                  | head.layers.28.attn.attention_drop                | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2297    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.28.attn.attn_matmul                   | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2297    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.28.attn.attn_matmul                   | input_1             | torch.float32 |         | -0.0813287        | 0.0743355        | -0.0004657     | 0.0005761             | torch.Size([16, 256, 64])        |
| 2297    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.28.attn.attn_matmul                   | output              | torch.float32 |         | -0.0813288        | 0.0743353        | -0.0004657     | 0.0005761             | torch.Size([16, 512, 64])        |
| 2298    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | input_0             | torch.float32 |         | -0.0813288        | 0.0743353        | -0.0004657     | 0.0005761             | torch.Size([16, 512, 64])        |
| 2298    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | output              | torch.float32 |         | -0.0813288        | 0.0743353        | -0.0004657     | 0.0005761             | torch.Size([512, 16, 64])        |
| 2299    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | input_0             | torch.float32 |         | -0.0813288        | 0.0743353        | -0.0004657     | 0.0005761             | torch.Size([512, 16, 64])        |
| 2299    | torch.Tensor.reshape                                                              | head.layers.28.attn                               | output              | torch.float32 |         | -0.0813288        | 0.0743353        | -0.0004657     | 0.0005761             | torch.Size([512, 2, 512])        |
| 2300    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.out_proj                      | input               | torch.float32 |         | -0.0813288        | 0.0743353        | -0.0004657     | 0.0005761             | torch.Size([512, 2, 512])        |
| 2300    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.out_proj                      | weight              | torch.float32 |         | -0.2395778        | 0.2118238        | -0.0001136     | 0.0023239             | torch.Size([512, 512])           |
| 2300    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.out_proj                      | bias                | torch.float32 |         | -0.2437576        | 0.2574523        | 0.0090795      | 0.0067918             | torch.Size([512])                |
| 2300    | torch.nn.modules.linear.Linear                                                    | head.layers.28.attn.out_proj                      | output              | torch.float32 |         | -0.4184768        | 0.3247609        | 0.0101470      | 0.0132138             | torch.Size([512, 2, 512])        |
| 2301    | torch.Tensor.view                                                                 | head.layers.28.attn                               | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2301    | torch.Tensor.view                                                                 | head.layers.28.attn                               | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 2302    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.28.attn.attn_weights_mean             | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 2302    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.28.attn.attn_weights_mean             | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 512, 256])        |
| 2303    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | input_0             | torch.float32 |         | -0.4184768        | 0.3247609        | 0.0101470      | 0.0132138             | torch.Size([512, 2, 512])        |
| 2303    | torch.Tensor.transpose                                                            | head.layers.28.attn                               | output              | torch.float32 |         | -0.4184768        | 0.3247609        | 0.0101470      | 0.0132138             | torch.Size([2, 512, 512])        |
| 2304    | torch.nn.modules.dropout.Dropout                                                  | head.layers.28.dropout                            | input               | torch.float32 |         | -0.4184768        | 0.3247609        | 0.0101470      | 0.0132138             | torch.Size([2, 512, 512])        |
| 2304    | torch.nn.modules.dropout.Dropout                                                  | head.layers.28.dropout                            | output              | torch.float32 |         | -0.4184768        | 0.3247609        | 0.0101470      | 0.0132138             | torch.Size([2, 512, 512])        |
| 2305    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.28.add                                | input_0             | torch.float32 |         | -4.8112426        | 7.3573351        | 0.0305177      | 0.8663927             | torch.Size([2, 512, 512])        |
| 2305    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.28.add                                | input_1             | torch.float32 |         | -0.4184768        | 0.3247609        | 0.0101470      | 0.0132138             | torch.Size([2, 512, 512])        |
| 2305    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.28.add                                | output              | torch.float32 |         | -4.6114211        | 7.1648636        | 0.0406647      | 0.8334846             | torch.Size([2, 512, 512])        |
| 2306    | torch.nn.modules.linear.Linear                                                    | head.fc_after(8)                                  | input               | torch.float32 |         | -4.6114211        | 7.1648636        | 0.0406647      | 0.8334846             | torch.Size([2, 512, 512])        |
| 2306    | torch.nn.modules.linear.Linear                                                    | head.fc_after(8)                                  | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 2306    | torch.nn.modules.linear.Linear                                                    | head.fc_after(8)                                  | output              | torch.float32 |         | -5.9567552        | 5.3179927        | 0.0243219      | 0.9436743             | torch.Size([2, 512, 256])        |
| 2307    | torch.nn.modules.linear.Linear                                                    | head.fc_before(9)                                 | input               | torch.float32 |         | -5.9567552        | 5.3179927        | 0.0243219      | 0.9436743             | torch.Size([2, 512, 256])        |
| 2307    | torch.nn.modules.linear.Linear                                                    | head.fc_before(9)                                 | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 2307    | torch.nn.modules.linear.Linear                                                    | head.fc_before(9)                                 | output              | torch.float32 |         | -3.6438746        | 3.1662881        | -0.0010093     | 0.0589215             | torch.Size([2, 512, 512])        |
| 2308    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.29.query_cat                          | input_0             | torch.float32 |         | -5.9567552        | 5.3179927        | 0.0243219      | 0.9436743             | torch.Size([2, 512, 256])        |
| 2308    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.29.query_cat                          | input_1             | torch.float32 |         | -1.7250781        | 7.3573351        | 0.0582166      | 0.8942280             | torch.Size([2, 512, 256])        |
| 2308    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.29.query_cat                          | output              | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([2, 512, 512])        |
| 2309    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.29.key_cat                            | input_0             | torch.float32 |         | -5.9567552        | 5.3179927        | 0.0243219      | 0.9436743             | torch.Size([2, 512, 256])        |
| 2309    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.29.key_cat                            | input_1             | torch.float32 |         | -1.7250781        | 7.3573351        | 0.0582166      | 0.8942280             | torch.Size([2, 512, 256])        |
| 2309    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.29.key_cat                            | output              | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([2, 512, 512])        |
| 2310    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | input_0             | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([2, 512, 512])        |
| 2310    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | output              | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([512, 2, 512])        |
| 2311    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | input_0             | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([2, 512, 512])        |
| 2311    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | output              | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([512, 2, 512])        |
| 2312    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | input_0             | torch.float32 |         | -3.6438746        | 3.1662881        | -0.0010093     | 0.0589215             | torch.Size([2, 512, 512])        |
| 2312    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | output              | torch.float32 |         | -3.6438746        | 3.1662881        | -0.0010093     | 0.0589215             | torch.Size([512, 2, 512])        |
| 2313    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | input_0             | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([512, 2, 512])        |
| 2313    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | output              | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([512, 2, 512])        |
| 2314    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | input_0             | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([512, 2, 512])        |
| 2314    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | output              | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([512, 2, 512])        |
| 2315    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | input_0             | torch.float32 |         | -3.6438746        | 3.1662881        | -0.0010093     | 0.0589215             | torch.Size([512, 2, 512])        |
| 2315    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | output              | torch.float32 |         | -3.6438746        | 3.1662881        | -0.0010093     | 0.0589215             | torch.Size([512, 2, 512])        |
| 2316    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.q_proj                        | input               | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([512, 2, 512])        |
| 2316    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.q_proj                        | weight              | torch.float32 |         | -0.3925455        | 0.4585033        | 0.0001725      | 0.0026408             | torch.Size([512, 512])           |
| 2316    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.q_proj                        | bias                | torch.float32 |         | -0.0954414        | 0.0812263        | -0.0016288     | 0.0003734             | torch.Size([512])                |
| 2316    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.q_proj                        | output              | torch.float32 |         | -8.5614042        | 11.0105572       | -0.0466576     | 2.7646449             | torch.Size([512, 2, 512])        |
| 2317    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.k_proj                        | input               | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([512, 2, 512])        |
| 2317    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.k_proj                        | weight              | torch.float32 |         | -0.6571054        | 0.6037697        | -0.0000865     | 0.0031884             | torch.Size([512, 512])           |
| 2317    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.k_proj                        | bias                | torch.float32 |         | -0.1333090        | 0.1077095        | -0.0008078     | 0.0002287             | torch.Size([512])                |
| 2317    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.k_proj                        | output              | torch.float32 |         | -13.2671824       | 13.7980032       | 0.0394268      | 4.5400257             | torch.Size([512, 2, 512])        |
| 2318    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.v_proj                        | input               | torch.float32 |         | -3.6438746        | 3.1662881        | -0.0010093     | 0.0589215             | torch.Size([512, 2, 512])        |
| 2318    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.v_proj                        | weight              | torch.float32 |         | -0.2302573        | 0.2758068        | -0.0000755     | 0.0018357             | torch.Size([512, 512])           |
| 2318    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.v_proj                        | bias                | torch.float32 |         | -0.3465908        | 0.3370203        | -0.0008104     | 0.0041902             | torch.Size([512])                |
| 2318    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.v_proj                        | output              | torch.float32 |         | -4.0200305        | 3.6517630        | -0.0056628     | 0.1548815             | torch.Size([512, 2, 512])        |
| 2319    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | input_0             | torch.float32 |         | -8.5614042        | 11.0105572       | -0.0466576     | 2.7646449             | torch.Size([512, 2, 512])        |
| 2319    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | output              | torch.float32 |         | -8.5614042        | 11.0105572       | -0.0466576     | 2.7646449             | torch.Size([512, 16, 64])        |
| 2320    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | input_0             | torch.float32 |         | -8.5614042        | 11.0105572       | -0.0466576     | 2.7646449             | torch.Size([512, 16, 64])        |
| 2320    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | output              | torch.float32 |         | -8.5614042        | 11.0105572       | -0.0466576     | 2.7646449             | torch.Size([16, 512, 64])        |
| 2321    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | input_0             | torch.float32 |         | -13.2671824       | 13.7980032       | 0.0394268      | 4.5400257             | torch.Size([512, 2, 512])        |
| 2321    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | output              | torch.float32 |         | -13.2671824       | 13.7980032       | 0.0394268      | 4.5400257             | torch.Size([512, 16, 64])        |
| 2322    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | input_0             | torch.float32 |         | -13.2671824       | 13.7980032       | 0.0394268      | 4.5400257             | torch.Size([512, 16, 64])        |
| 2322    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | output              | torch.float32 |         | -13.2671824       | 13.7980032       | 0.0394268      | 4.5400257             | torch.Size([16, 512, 64])        |
| 2323    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | input_0             | torch.float32 |         | -4.0200305        | 3.6517630        | -0.0056628     | 0.1548815             | torch.Size([512, 2, 512])        |
| 2323    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | output              | torch.float32 |         | -4.0200305        | 3.6517630        | -0.0056628     | 0.1548815             | torch.Size([512, 16, 64])        |
| 2324    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | input_0             | torch.float32 |         | -4.0200305        | 3.6517630        | -0.0056628     | 0.1548815             | torch.Size([512, 16, 64])        |
| 2324    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | output              | torch.float32 |         | -4.0200305        | 3.6517630        | -0.0056628     | 0.1548815             | torch.Size([16, 512, 64])        |
| 2325    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.29.attn.q_scale_mul                   | input_0             | torch.float32 |         | -8.5614042        | 11.0105572       | -0.0466576     | 2.7646449             | torch.Size([16, 512, 64])        |
| 2325    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.29.attn.q_scale_mul                   | output              | torch.float32 |         | -1.0701755        | 1.3763196        | -0.0058322     | 0.0431976             | torch.Size([16, 512, 64])        |
| 2326    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | input_0             | torch.float32 |         | -13.2671824       | 13.7980032       | 0.0394268      | 4.5400257             | torch.Size([16, 512, 64])        |
| 2326    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | output              | torch.float32 |         | -13.2671824       | 13.7980032       | 0.0394268      | 4.5400257             | torch.Size([16, 64, 512])        |
| 2327    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.29.attn.matmul                        | input_0             | torch.float32 |         | -1.0701755        | 1.3763196        | -0.0058322     | 0.0431976             | torch.Size([16, 512, 64])        |
| 2327    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.29.attn.matmul                        | input_1             | torch.float32 |         | -13.2671824       | 13.7980032       | 0.0394268      | 4.5400257             | torch.Size([16, 64, 512])        |
| 2327    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.29.attn.matmul                        | output              | torch.float32 |         | -75.6020813       | 87.8002625       | -1.0190628     | 157.0408325           | torch.Size([16, 512, 512])       |
| 2328    | torch.Tensor.max                                                                  | head.layers.29.attn.softmax                       | input               | torch.float32 |         | -75.6020813       | 87.8002625       | -1.0190628     | 157.0408325           | torch.Size([16, 512, 512])       |
| 2328    | torch.Tensor.max                                                                  | head.layers.29.attn.softmax                       | output_0            | torch.float32 |         | 0.9382647         | 87.8002625       | 19.0137291     | 306.4676514           | torch.Size([16, 512, 1])         |
| 2328    | torch.Tensor.max                                                                  | head.layers.29.attn.softmax                       | output_1            | torch.int64   |         | 0.0000000         | 511.0000000      | 299.8914795    | 13978.5332031         | torch.Size([16, 512, 1])         |
| 2329    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.29.attn.softmax.sub                   | input_0             | torch.float32 |         | -75.6020813       | 87.8002625       | -1.0190628     | 157.0408325           | torch.Size([16, 512, 512])       |
| 2329    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.29.attn.softmax.sub                   | input_1             | torch.float32 |         | 0.9382647         | 87.8002625       | 19.0137291     | 306.4676514           | torch.Size([16, 512, 1])         |
| 2329    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.29.attn.softmax.sub                   | output              | torch.float32 |         | -151.4549255      | 0.0000000        | -20.0327911    | 470.3971558           | torch.Size([16, 512, 512])       |
| 2330    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.29.attn.softmax.exp                   | input               | torch.float32 |         | -151.4549255      | 0.0000000        | -20.0327911    | 470.3971558           | torch.Size([16, 512, 512])       |
| 2330    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.29.attn.softmax.exp                   | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0308155      | 0.0161183             | torch.Size([16, 512, 512])       |
| 2331    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.29.attn.softmax.sum                   | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0308155      | 0.0161183             | torch.Size([16, 512, 512])       |
| 2331    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.29.attn.softmax.sum                   | output              | torch.float32 |         | 1.0000000         | 128.1792450      | 15.7775345     | 619.8306274           | torch.Size([16, 512, 1])         |
| 2332    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.29.attn.softmax.reciprocal            | input               | torch.float32 |         | 1.0000000         | 128.1792450      | 15.7775345     | 619.8306274           | torch.Size([16, 512, 1])         |
| 2332    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.29.attn.softmax.reciprocal            | output              | torch.float32 |         | 0.0078016         | 1.0000000        | 0.2207657      | 0.0582946             | torch.Size([16, 512, 1])         |
| 2333    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.29.attn.softmax.mul                   | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0308155      | 0.0161183             | torch.Size([16, 512, 512])       |
| 2333    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.29.attn.softmax.mul                   | input_1             | torch.float32 |         | 0.0078016         | 1.0000000        | 0.2207657      | 0.0582946             | torch.Size([16, 512, 1])         |
| 2333    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.29.attn.softmax.mul                   | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0002838             | torch.Size([16, 512, 512])       |
| 2334    | torch.nn.modules.dropout.Dropout                                                  | head.layers.29.attn.attention_drop                | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0002838             | torch.Size([16, 512, 512])       |
| 2334    | torch.nn.modules.dropout.Dropout                                                  | head.layers.29.attn.attention_drop                | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0002838             | torch.Size([16, 512, 512])       |
| 2335    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.29.attn.attn_matmul                   | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0002838             | torch.Size([16, 512, 512])       |
| 2335    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.29.attn.attn_matmul                   | input_1             | torch.float32 |         | -4.0200305        | 3.6517630        | -0.0056628     | 0.1548815             | torch.Size([16, 512, 64])        |
| 2335    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.29.attn.attn_matmul                   | output              | torch.float32 |         | -2.8560112        | 2.7665956        | -0.0069019     | 0.1109026             | torch.Size([16, 512, 64])        |
| 2336    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | input_0             | torch.float32 |         | -2.8560112        | 2.7665956        | -0.0069019     | 0.1109026             | torch.Size([16, 512, 64])        |
| 2336    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | output              | torch.float32 |         | -2.8560112        | 2.7665956        | -0.0069019     | 0.1109026             | torch.Size([512, 16, 64])        |
| 2337    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | input_0             | torch.float32 |         | -2.8560112        | 2.7665956        | -0.0069019     | 0.1109026             | torch.Size([512, 16, 64])        |
| 2337    | torch.Tensor.reshape                                                              | head.layers.29.attn                               | output              | torch.float32 |         | -2.8560112        | 2.7665956        | -0.0069019     | 0.1109026             | torch.Size([512, 2, 512])        |
| 2338    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.out_proj                      | input               | torch.float32 |         | -2.8560112        | 2.7665956        | -0.0069019     | 0.1109026             | torch.Size([512, 2, 512])        |
| 2338    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.out_proj                      | weight              | torch.float32 |         | -0.2557875        | 0.2624706        | -0.0000386     | 0.0028310             | torch.Size([512, 512])           |
| 2338    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.out_proj                      | bias                | torch.float32 |         | -0.4021156        | 0.3647011        | -0.0051460     | 0.0224833             | torch.Size([512])                |
| 2338    | torch.nn.modules.linear.Linear                                                    | head.layers.29.attn.out_proj                      | output              | torch.float32 |         | -4.5179634        | 3.0927069        | -0.0387644     | 0.4664755             | torch.Size([512, 2, 512])        |
| 2339    | torch.Tensor.view                                                                 | head.layers.29.attn                               | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0002838             | torch.Size([16, 512, 512])       |
| 2339    | torch.Tensor.view                                                                 | head.layers.29.attn                               | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0002838             | torch.Size([2, 8, 512, 512])     |
| 2340    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.29.attn.attn_weights_mean             | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0019531      | 0.0002838             | torch.Size([2, 8, 512, 512])     |
| 2340    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.29.attn.attn_weights_mean             | output              | torch.float32 |         | 0.0000002         | 0.2152505        | 0.0019531      | 0.0000387             | torch.Size([2, 512, 512])        |
| 2341    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | input_0             | torch.float32 |         | -4.5179634        | 3.0927069        | -0.0387644     | 0.4664755             | torch.Size([512, 2, 512])        |
| 2341    | torch.Tensor.transpose                                                            | head.layers.29.attn                               | output              | torch.float32 |         | -4.5179634        | 3.0927069        | -0.0387644     | 0.4664755             | torch.Size([2, 512, 512])        |
| 2342    | torch.nn.modules.dropout.Dropout                                                  | head.layers.29.dropout                            | input               | torch.float32 |         | -4.5179634        | 3.0927069        | -0.0387644     | 0.4664755             | torch.Size([2, 512, 512])        |
| 2342    | torch.nn.modules.dropout.Dropout                                                  | head.layers.29.dropout                            | output              | torch.float32 |         | -4.5179634        | 3.0927069        | -0.0387644     | 0.4664755             | torch.Size([2, 512, 512])        |
| 2343    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.29.add                                | input_0             | torch.float32 |         | -5.9567552        | 7.3573351        | 0.0412693      | 0.9192367             | torch.Size([2, 512, 512])        |
| 2343    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.29.add                                | input_1             | torch.float32 |         | -4.5179634        | 3.0927069        | -0.0387644     | 0.4664755             | torch.Size([2, 512, 512])        |
| 2343    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.29.add                                | output              | torch.float32 |         | -8.3181572        | 8.4516039        | 0.0025049      | 1.3599186             | torch.Size([2, 512, 512])        |
| 2344    | torch.nn.modules.linear.Linear                                                    | head.fc_after(9)                                  | input               | torch.float32 |         | -8.3181572        | 8.4516039        | 0.0025049      | 1.3599186             | torch.Size([2, 512, 512])        |
| 2344    | torch.nn.modules.linear.Linear                                                    | head.fc_after(9)                                  | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 2344    | torch.nn.modules.linear.Linear                                                    | head.fc_after(9)                                  | output              | torch.float32 |         | -54.2534142       | 37.9146996       | 0.0708129      | 15.6116667            | torch.Size([2, 512, 256])        |
| 2345    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.30.input_mean.mean                    | input_0             | torch.float32 |         | -54.2534142       | 37.9146996       | 0.0708129      | 15.6116667            | torch.Size([2, 512, 256])        |
| 2345    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.30.input_mean.mean                    | output              | torch.float32 |         | -0.0536559        | 0.2098922        | 0.0708129      | 0.0014367             | torch.Size([2, 512, 1])          |
| 2346    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.30.sub                                | input_0             | torch.float32 |         | -54.2534142       | 37.9146996       | 0.0708129      | 15.6116667            | torch.Size([2, 512, 256])        |
| 2346    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.30.sub                                | input_1             | torch.float32 |         | -0.0536559        | 0.2098922        | 0.0708129      | 0.0014367             | torch.Size([2, 512, 1])          |
| 2346    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.30.sub                                | output              | torch.float32 |         | -54.3762550       | 37.7918587       | -0.0000000     | 15.6102314            | torch.Size([2, 512, 256])        |
| 2347    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.30.mul                                | input_0             | torch.float32 |         | -54.3762550       | 37.7918587       | -0.0000000     | 15.6102314            | torch.Size([2, 512, 256])        |
| 2347    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.30.mul                                | input_1             | torch.float32 |         | -54.3762550       | 37.7918587       | -0.0000000     | 15.6102314            | torch.Size([2, 512, 256])        |
| 2347    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.30.mul                                | output              | torch.float32 |         | 0.0000000         | 2956.7770996     | 15.6101723     | 9034.2314453          | torch.Size([2, 512, 256])        |
| 2348    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.30.var_mean.mean                      | input_0             | torch.float32 |         | 0.0000000         | 2956.7770996     | 15.6101723     | 9034.2314453          | torch.Size([2, 512, 256])        |
| 2348    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.30.var_mean.mean                      | output              | torch.float32 |         | 6.7954435         | 43.7647781       | 15.6101723     | 59.1452026            | torch.Size([2, 512, 1])          |
| 2349    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.30.rsqrt                              | input               | torch.float32 |         | 6.7954435         | 43.7647781       | 15.6101723     | 59.1452026            | torch.Size([2, 512, 1])          |
| 2349    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.30.rsqrt                              | output              | torch.float32 |         | 0.1511602         | 0.3836108        | 0.2788079      | 0.0051343             | torch.Size([2, 512, 1])          |
| 2350    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.30.out_mul                            | input_0             | torch.float32 |         | -54.3762550       | 37.7918587       | -0.0000000     | 15.6102314            | torch.Size([2, 512, 256])        |
| 2350    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.30.out_mul                            | input_1             | torch.float32 |         | 0.1511602         | 0.3836108        | 0.2788079      | 0.0051343             | torch.Size([2, 512, 1])          |
| 2350    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.30.out_mul                            | output              | torch.float32 |         | -8.5737753        | 6.2620316        | -0.0000000     | 1.0000030             | torch.Size([2, 512, 256])        |
| 2351    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.30.weight_quant                       | input               | torch.float32 |         | 0.7288531         | 1.0363919        | 0.8788871      | 0.0022640             | torch.Size([256])                |
| 2351    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.30.weight_quant                       | output              | torch.float32 |         | 0.7288531         | 1.0363919        | 0.8788871      | 0.0022640             | torch.Size([256])                |
| 2352    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.30.weight_mul                         | input_0             | torch.float32 |         | -8.5737753        | 6.2620316        | -0.0000000     | 1.0000030             | torch.Size([2, 512, 256])        |
| 2352    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.30.weight_mul                         | input_1             | torch.float32 |         | 0.7288531         | 1.0363919        | 0.8788871      | 0.0022640             | torch.Size([256])                |
| 2352    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.30.weight_mul                         | output              | torch.float32 |         | -6.4284468        | 5.1064262        | 0.0023570      | 0.6684917             | torch.Size([2, 512, 256])        |
| 2353    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.30.bias_quant                         | input               | torch.float32 |         | -0.1932694        | 0.2182894        | -0.0024702     | 0.0023584             | torch.Size([256])                |
| 2353    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.30.bias_quant                         | output              | torch.float32 |         | -0.1932694        | 0.2182894        | -0.0024702     | 0.0023584             | torch.Size([256])                |
| 2354    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.30.bias_add                           | input_0             | torch.float32 |         | -6.4284468        | 5.1064262        | 0.0023570      | 0.6684917             | torch.Size([2, 512, 256])        |
| 2354    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.30.bias_add                           | input_1             | torch.float32 |         | -0.1932694        | 0.2182894        | -0.0024702     | 0.0023584             | torch.Size([256])                |
| 2354    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.30.bias_add                           | output              | torch.float32 |         | -6.2538772        | 4.9512348        | -0.0001131     | 0.6297012             | torch.Size([2, 512, 256])        |
| 2355    | torch.nn.modules.linear.Linear                                                    | head.layers.31.kps_generator.offset               | input               | torch.float32 |         | -6.2538772        | 4.9512348        | -0.0001131     | 0.6297012             | torch.Size([2, 512, 256])        |
| 2355    | torch.nn.modules.linear.Linear                                                    | head.layers.31.kps_generator.offset               | weight              | torch.float32 |         | -0.1990188        | 0.2361899        | -0.0012109     | 0.0039983             | torch.Size([24, 256])            |
| 2355    | torch.nn.modules.linear.Linear                                                    | head.layers.31.kps_generator.offset               | bias                | torch.float32 |         | -0.0593897        | 0.0563206        | -0.0048383     | 0.0008348             | torch.Size([24])                 |
| 2355    | torch.nn.modules.linear.Linear                                                    | head.layers.31.kps_generator.offset               | output              | torch.float32 |         | -2.9884589        | 4.1741672        | -0.0114157     | 0.7261086             | torch.Size([2, 512, 24])         |
| 2356    | torch.Tensor.view                                                                 | head.layers.31.kps_generator                      | input_0             | torch.float32 |         | -2.9884589        | 4.1741672        | -0.0114157     | 0.7261086             | torch.Size([2, 512, 24])         |
| 2356    | torch.Tensor.view                                                                 | head.layers.31.kps_generator                      | output              | torch.float32 |         | -2.9884589        | 4.1741672        | -0.0114157     | 0.7261086             | torch.Size([2, 512, 8, 3])       |
| 2357    | torch.Tensor.__getitem__                                                          | head.layers.31.kps_generator                      | input_0             | torch.float32 |         | -53.4885979       | 53.6353264       | 0.2314168      | 80.4070053            | torch.Size([2, 512, 11])         |
| 2357    | torch.Tensor.__getitem__                                                          | head.layers.31.kps_generator                      | output              | torch.float32 |         | -53.4885979       | 53.6353264       | 0.8896438      | 291.2262573           | torch.Size([2, 512, 1, 3])       |
| 2358    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.kps_generator.keypoints_add        | input_0             | torch.float32 |         | -2.9884589        | 4.1741672        | -0.0114157     | 0.7261086             | torch.Size([2, 512, 8, 3])       |
| 2358    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.kps_generator.keypoints_add        | input_1             | torch.float32 |         | -53.4885979       | 53.6353264       | 0.8896438      | 291.2262573           | torch.Size([2, 512, 1, 3])       |
| 2358    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.kps_generator.keypoints_add        | output              | torch.float32 |         | -55.6243553       | 56.0939903       | 0.8782283      | 292.3339539           | torch.Size([2, 512, 8, 3])       |
| 2359    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.weight_add                         | input_0             | torch.float32 |         | -6.2538772        | 4.9512348        | -0.0001131     | 0.6297012             | torch.Size([2, 512, 256])        |
| 2359    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.weight_add                         | input_1             | torch.float32 |         | -1.7250781        | 7.3573351        | 0.0582166      | 0.8942280             | torch.Size([2, 512, 256])        |
| 2359    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.weight_add                         | output              | torch.float32 |         | -6.6703296        | 7.3313122        | 0.0581035      | 1.4316869             | torch.Size([2, 512, 256])        |
| 2360    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 2360    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 2361    | torch.Tensor.reshape                                                              | head.layers.31                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 2361    | torch.Tensor.reshape                                                              | head.layers.31                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 2362    | torch.nn.modules.linear.Linear                                                    | head.layers.31.camera_encoder.0                   | input               | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 2362    | torch.nn.modules.linear.Linear                                                    | head.layers.31.camera_encoder.0                   | weight              | torch.float32 |         | -0.6011963        | 0.6129394        | 0.0069147      | 0.0157550             | torch.Size([256, 12])            |
| 2362    | torch.nn.modules.linear.Linear                                                    | head.layers.31.camera_encoder.0                   | bias                | torch.float32 |         | -0.3291516        | 0.3449677        | 0.0006622      | 0.0283183             | torch.Size([256])                |
| 2362    | torch.nn.modules.linear.Linear                                                    | head.layers.31.camera_encoder.0                   | output              | torch.float32 |         | -1.3158653        | 1.2834746        | -0.0509084     | 0.2076911             | torch.Size([2, 6, 256])          |
| 2363    | torch.nn.modules.activation.ReLU                                                  | head.layers.31.camera_encoder.1                   | input               | torch.float32 |         | 0.0000000         | 1.2834746        | 0.1708098      | 0.0642929             | torch.Size([2, 6, 256])          |
| 2363    | torch.nn.modules.activation.ReLU                                                  | head.layers.31.camera_encoder.1                   | output              | torch.float32 |         | 0.0000000         | 1.2834746        | 0.1708098      | 0.0642929             | torch.Size([2, 6, 256])          |
| 2364    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.2.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 1.2834746        | 0.1708098      | 0.0642929             | torch.Size([2, 6, 256])          |
| 2364    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.2.input_mean.mean   | output              | torch.float32 |         | 0.1110955         | 0.1963482        | 0.1708098      | 0.0009058             | torch.Size([2, 6, 1])            |
| 2365    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.2.sub               | input_0             | torch.float32 |         | 0.0000000         | 1.2834746        | 0.1708098      | 0.0642929             | torch.Size([2, 6, 256])          |
| 2365    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.2.sub               | input_1             | torch.float32 |         | 0.1110955         | 0.1963482        | 0.1708098      | 0.0009058             | torch.Size([2, 6, 1])            |
| 2365    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.2.sub               | output              | torch.float32 |         | -0.1963482        | 1.0896572        | 0.0000000      | 0.0634623             | torch.Size([2, 6, 256])          |
| 2366    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.mul               | input_0             | torch.float32 |         | -0.1963482        | 1.0896572        | 0.0000000      | 0.0634623             | torch.Size([2, 6, 256])          |
| 2366    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.mul               | input_1             | torch.float32 |         | -0.1963482        | 1.0896572        | 0.0000000      | 0.0634623             | torch.Size([2, 6, 256])          |
| 2366    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.mul               | output              | torch.float32 |         | 0.0000000         | 1.1873528        | 0.0634416      | 0.0135826             | torch.Size([2, 6, 256])          |
| 2367    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.2.var_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 1.1873528        | 0.0634416      | 0.0135826             | torch.Size([2, 6, 256])          |
| 2367    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.2.var_mean.mean     | output              | torch.float32 |         | 0.0262578         | 0.0877452        | 0.0634416      | 0.0004312             | torch.Size([2, 6, 1])            |
| 2368    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.31.camera_encoder.2.rsqrt             | input               | torch.float32 |         | 0.0262578         | 0.0877452        | 0.0634416      | 0.0004312             | torch.Size([2, 6, 1])            |
| 2368    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.31.camera_encoder.2.rsqrt             | output              | torch.float32 |         | 3.3756974         | 6.1700463        | 4.1907306      | 0.9341974             | torch.Size([2, 6, 1])            |
| 2369    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.out_mul           | input_0             | torch.float32 |         | -0.1963482        | 1.0896572        | 0.0000000      | 0.0634623             | torch.Size([2, 6, 256])          |
| 2369    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.out_mul           | input_1             | torch.float32 |         | 3.3756974         | 6.1700463        | 4.1907306      | 0.9341974             | torch.Size([2, 6, 1])            |
| 2369    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.out_mul           | output              | torch.float32 |         | -0.7109775        | 4.8626375        | -0.0000000     | 1.0001414             | torch.Size([2, 6, 256])          |
| 2370    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.camera_encoder.2.weight_quant      | input               | torch.float32 |         | 0.7249505         | 1.2187127        | 0.9718287      | 0.0056881             | torch.Size([256])                |
| 2370    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.camera_encoder.2.weight_quant      | output              | torch.float32 |         | 0.7249505         | 1.2187127        | 0.9718287      | 0.0056881             | torch.Size([256])                |
| 2371    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.weight_mul        | input_0             | torch.float32 |         | -0.7109775        | 4.8626375        | -0.0000000     | 1.0001414             | torch.Size([2, 6, 256])          |
| 2371    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.weight_mul        | input_1             | torch.float32 |         | 0.7249505         | 1.2187127        | 0.9718287      | 0.0056881             | torch.Size([256])                |
| 2371    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.2.weight_mul        | output              | torch.float32 |         | -0.8318489        | 5.0973649        | 0.0099086      | 0.9658818             | torch.Size([2, 6, 256])          |
| 2372    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.camera_encoder.2.bias_quant        | input               | torch.float32 |         | -0.1110947        | 0.1897046        | 0.0142131      | 0.0028453             | torch.Size([256])                |
| 2372    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.camera_encoder.2.bias_quant        | output              | torch.float32 |         | -0.1110947        | 0.1897046        | 0.0142131      | 0.0028453             | torch.Size([256])                |
| 2373    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.2.bias_add          | input_0             | torch.float32 |         | -0.8318489        | 5.0973649        | 0.0099086      | 0.9658818             | torch.Size([2, 6, 256])          |
| 2373    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.2.bias_add          | input_1             | torch.float32 |         | -0.1110947        | 0.1897046        | 0.0142131      | 0.0028453             | torch.Size([256])                |
| 2373    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.2.bias_add          | output              | torch.float32 |         | -0.9038472        | 5.1706319        | 0.0241217      | 0.9558032             | torch.Size([2, 6, 256])          |
| 2374    | torch.nn.modules.linear.Linear                                                    | head.layers.31.camera_encoder.3                   | input               | torch.float32 |         | -0.9038472        | 5.1706319        | 0.0241217      | 0.9558032             | torch.Size([2, 6, 256])          |
| 2374    | torch.nn.modules.linear.Linear                                                    | head.layers.31.camera_encoder.3                   | weight              | torch.float32 |         | -0.4575176        | 0.4520092        | 0.0014985      | 0.0050318             | torch.Size([256, 256])           |
| 2374    | torch.nn.modules.linear.Linear                                                    | head.layers.31.camera_encoder.3                   | bias                | torch.float32 |         | -0.0873436        | 0.3426891        | -0.0051534     | 0.0021563             | torch.Size([256])                |
| 2374    | torch.nn.modules.linear.Linear                                                    | head.layers.31.camera_encoder.3                   | output              | torch.float32 |         | -7.8885822        | 49.9677048       | -0.6051593     | 27.6833992            | torch.Size([2, 6, 256])          |
| 2375    | torch.nn.modules.activation.ReLU                                                  | head.layers.31.camera_encoder.4                   | input               | torch.float32 |         | 0.0000000         | 49.9677048       | 1.0462378      | 22.9525318            | torch.Size([2, 6, 256])          |
| 2375    | torch.nn.modules.activation.ReLU                                                  | head.layers.31.camera_encoder.4                   | output              | torch.float32 |         | 0.0000000         | 49.9677048       | 1.0462378      | 22.9525318            | torch.Size([2, 6, 256])          |
| 2376    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.5.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 49.9677048       | 1.0462378      | 22.9525318            | torch.Size([2, 6, 256])          |
| 2376    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.5.input_mean.mean   | output              | torch.float32 |         | 0.9989026         | 1.1142019        | 1.0462377      | 0.0016135             | torch.Size([2, 6, 1])            |
| 2377    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.5.sub               | input_0             | torch.float32 |         | 0.0000000         | 49.9677048       | 1.0462378      | 22.9525318            | torch.Size([2, 6, 256])          |
| 2377    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.5.sub               | input_1             | torch.float32 |         | 0.9989026         | 1.1142019        | 1.0462377      | 0.0016135             | torch.Size([2, 6, 1])            |
| 2377    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.31.camera_encoder.5.sub               | output              | torch.float32 |         | -1.1142019        | 48.9319839       | -0.0000000     | 22.9510536            | torch.Size([2, 6, 256])          |
| 2378    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.mul               | input_0             | torch.float32 |         | -1.1142019        | 48.9319839       | -0.0000000     | 22.9510536            | torch.Size([2, 6, 256])          |
| 2378    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.mul               | input_1             | torch.float32 |         | -1.1142019        | 48.9319839       | -0.0000000     | 22.9510536            | torch.Size([2, 6, 256])          |
| 2378    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.mul               | output              | torch.float32 |         | 0.0000040         | 2394.3391113     | 22.9435844     | 30014.3007812         | torch.Size([2, 6, 256])          |
| 2379    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.5.var_mean.mean     | input_0             | torch.float32 |         | 0.0000040         | 2394.3391113     | 22.9435844     | 30014.3007812         | torch.Size([2, 6, 256])          |
| 2379    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.31.camera_encoder.5.var_mean.mean     | output              | torch.float32 |         | 20.5911922        | 24.6743584       | 22.9435844     | 2.1798716             | torch.Size([2, 6, 1])            |
| 2380    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.31.camera_encoder.5.rsqrt             | input               | torch.float32 |         | 20.5911922        | 24.6743584       | 22.9435844     | 2.1798716             | torch.Size([2, 6, 1])            |
| 2380    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.31.camera_encoder.5.rsqrt             | output              | torch.float32 |         | 0.2013154         | 0.2203734        | 0.2090713      | 0.0000461             | torch.Size([2, 6, 1])            |
| 2381    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.out_mul           | input_0             | torch.float32 |         | -1.1142019        | 48.9319839       | -0.0000000     | 22.9510536            | torch.Size([2, 6, 256])          |
| 2381    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.out_mul           | input_1             | torch.float32 |         | 0.2013154         | 0.2203734        | 0.2090713      | 0.0000461             | torch.Size([2, 6, 1])            |
| 2381    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.out_mul           | output              | torch.float32 |         | -0.2403492        | 9.8655224        | 0.0000000      | 1.0003252             | torch.Size([2, 6, 256])          |
| 2382    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.camera_encoder.5.weight_quant      | input               | torch.float32 |         | 0.4651215         | 1.3983060        | 0.8868107      | 0.0178757             | torch.Size([256])                |
| 2382    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.camera_encoder.5.weight_quant      | output              | torch.float32 |         | 0.4651215         | 1.3983060        | 0.8868107      | 0.0178757             | torch.Size([256])                |
| 2383    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.weight_mul        | input_0             | torch.float32 |         | -0.2403492        | 9.8655224        | 0.0000000      | 1.0003252             | torch.Size([2, 6, 256])          |
| 2383    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.weight_mul        | input_1             | torch.float32 |         | 0.4651215         | 1.3983060        | 0.8868107      | 0.0178757             | torch.Size([256])                |
| 2383    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.camera_encoder.5.weight_mul        | output              | torch.float32 |         | -0.3360817        | 7.3869476        | -0.0267327     | 0.5148019             | torch.Size([2, 6, 256])          |
| 2384    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.camera_encoder.5.bias_quant        | input               | torch.float32 |         | -0.4541008        | 0.5208398        | 0.0459723      | 0.0227529             | torch.Size([256])                |
| 2384    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.camera_encoder.5.bias_quant        | output              | torch.float32 |         | -0.4541008        | 0.5208398        | 0.0459723      | 0.0227529             | torch.Size([256])                |
| 2385    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.5.bias_add          | input_0             | torch.float32 |         | -0.3360817        | 7.3869476        | -0.0267327     | 0.5148019             | torch.Size([2, 6, 256])          |
| 2385    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.5.bias_add          | input_1             | torch.float32 |         | -0.4541008        | 0.5208398        | 0.0459723      | 0.0227529             | torch.Size([256])                |
| 2385    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.camera_encoder.5.bias_add          | output              | torch.float32 |         | -0.7555460        | 7.1605878        | 0.0192396      | 0.4758643             | torch.Size([2, 6, 256])          |
| 2386    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | input_0             | torch.float32 |         | -6.6703296        | 7.3313122        | 0.0581035      | 1.4316869             | torch.Size([2, 512, 256])        |
| 2386    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | output              | torch.float32 |         | -6.6703296        | 7.3313122        | 0.0581035      | 1.4316869             | torch.Size([2, 512, 1, 256])     |
| 2387    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | input_0             | torch.float32 |         | -0.7555460        | 7.1605878        | 0.0192396      | 0.4758643             | torch.Size([2, 6, 256])          |
| 2387    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | output              | torch.float32 |         | -0.7555460        | 7.1605878        | 0.0192396      | 0.4758643             | torch.Size([2, 1, 6, 256])       |
| 2388    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.cam_add                            | input_0             | torch.float32 |         | -6.6703296        | 7.3313122        | 0.0581035      | 1.4316869             | torch.Size([2, 512, 1, 256])     |
| 2388    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.cam_add                            | input_1             | torch.float32 |         | -0.7555460        | 7.1605878        | 0.0192396      | 0.4758643             | torch.Size([2, 1, 6, 256])       |
| 2388    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.31.cam_add                            | output              | torch.float32 |         | -6.2612815        | 9.3090687        | 0.0773431      | 1.8078310             | torch.Size([2, 512, 6, 256])     |
| 2389    | torch.nn.modules.linear.Linear                                                    | head.layers.31.weights_fc                         | input               | torch.float32 |         | -6.2612815        | 9.3090687        | 0.0773431      | 1.8078310             | torch.Size([2, 512, 6, 256])     |
| 2389    | torch.nn.modules.linear.Linear                                                    | head.layers.31.weights_fc                         | weight              | torch.float32 |         | -0.4320964        | 0.3347851        | 0.0003806      | 0.0034810             | torch.Size([64, 256])            |
| 2389    | torch.nn.modules.linear.Linear                                                    | head.layers.31.weights_fc                         | bias                | torch.float32 |         | -0.0894180        | 0.0804906        | -0.0091073     | 0.0015407             | torch.Size([64])                 |
| 2389    | torch.nn.modules.linear.Linear                                                    | head.layers.31.weights_fc                         | output              | torch.float32 |         | -12.1557045       | 6.0413985        | -0.4682193     | 7.0299344             | torch.Size([2, 512, 6, 64])      |
| 2390    | torch.Tensor.reshape                                                              | head.layers.31                                    | input_0             | torch.float32 |         | -12.1557045       | 6.0413985        | -0.4682193     | 7.0299344             | torch.Size([2, 512, 6, 64])      |
| 2390    | torch.Tensor.reshape                                                              | head.layers.31                                    | output              | torch.float32 |         | -12.1557045       | 6.0413985        | -0.4682193     | 7.0299344             | torch.Size([2, 512, 48, 8])      |
| 2391    | torch.Tensor.max                                                                  | head.layers.31.weight_softmax                     | input               | torch.float32 |         | -12.1557045       | 6.0413985        | -0.4682193     | 7.0299344             | torch.Size([2, 512, 48, 8])      |
| 2391    | torch.Tensor.max                                                                  | head.layers.31.weight_softmax                     | output_0            | torch.float32 |         | 1.2288781         | 6.0413985        | 3.1107299      | 0.5926782             | torch.Size([2, 512, 1, 8])       |
| 2391    | torch.Tensor.max                                                                  | head.layers.31.weight_softmax                     | output_1            | torch.int64   |         | 4.0000000         | 47.0000000       | 29.6107178     | 175.1494904           | torch.Size([2, 512, 1, 8])       |
| 2392    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.31.weight_softmax.sub                 | input_0             | torch.float32 |         | -12.1557045       | 6.0413985        | -0.4682193     | 7.0299344             | torch.Size([2, 512, 48, 8])      |
| 2392    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.31.weight_softmax.sub                 | input_1             | torch.float32 |         | 1.2288781         | 6.0413985        | 3.1107299      | 0.5926782             | torch.Size([2, 512, 1, 8])       |
| 2392    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.31.weight_softmax.sub                 | output              | torch.float32 |         | -16.5708981       | 0.0000000        | -3.5789490     | 7.1738210             | torch.Size([2, 512, 48, 8])      |
| 2393    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.31.weight_softmax.exp                 | input               | torch.float32 |         | -16.5708981       | 0.0000000        | -3.5789490     | 7.1738210             | torch.Size([2, 512, 48, 8])      |
| 2393    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.31.weight_softmax.exp                 | output              | torch.float32 |         | 0.0000001         | 1.0000000        | 0.1995672      | 0.0873134             | torch.Size([2, 512, 48, 8])      |
| 2394    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.31.weight_softmax.sum                 | input               | torch.float32 |         | 0.0000001         | 1.0000000        | 0.1995672      | 0.0873134             | torch.Size([2, 512, 48, 8])      |
| 2394    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.31.weight_softmax.sum                 | output              | torch.float32 |         | 3.9205604         | 21.4565487       | 9.5792294      | 6.0355058             | torch.Size([2, 512, 1, 8])       |
| 2395    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.31.weight_softmax.reciprocal          | input               | torch.float32 |         | 3.9205604         | 21.4565487       | 9.5792294      | 6.0355058             | torch.Size([2, 512, 1, 8])       |
| 2395    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.31.weight_softmax.reciprocal          | output              | torch.float32 |         | 0.0466058         | 0.2550656        | 0.1117294      | 0.0009023             | torch.Size([2, 512, 1, 8])       |
| 2396    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.weight_softmax.mul                 | input_0             | torch.float32 |         | 0.0000001         | 1.0000000        | 0.1995672      | 0.0873134             | torch.Size([2, 512, 48, 8])      |
| 2396    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.weight_softmax.mul                 | input_1             | torch.float32 |         | 0.0466058         | 0.2550656        | 0.1117294      | 0.0009023             | torch.Size([2, 512, 1, 8])       |
| 2396    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.weight_softmax.mul                 | output              | torch.float32 |         | 0.0000000         | 0.2550656        | 0.0208333      | 0.0010615             | torch.Size([2, 512, 48, 8])      |
| 2397    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | input_0             | torch.float32 |         | -55.6243553       | 56.0939903       | 0.8782283      | 292.3339539           | torch.Size([2, 512, 8, 3])       |
| 2397    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | output              | torch.float32 |         | -47.1987877       | 51.4335022       | 1.6367610      | 310.9832764           | torch.Size([2, 512, 8, 1])       |
| 2398    | torch.ones_like                                                                   | head.layers.31                                    | input               | torch.float32 |         | -47.1987877       | 51.4335022       | 1.6367610      | 310.9832764           | torch.Size([2, 512, 8, 1])       |
| 2398    | torch.ones_like                                                                   | head.layers.31                                    | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 2399    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.point_quant_stub                   | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 2399    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.31.point_quant_stub                   | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 2400    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.31.point_cat                          | input_0             | torch.float32 |         | -55.6243553       | 56.0939903       | 0.8782283      | 292.3339539           | torch.Size([2, 512, 8, 3])       |
| 2400    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.31.point_cat                          | input_1             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 2400    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.31.point_cat                          | output              | torch.float32 |         | -55.6243553       | 56.0939903       | 0.9086712      | 219.2510529           | torch.Size([2, 512, 8, 4])       |
| 2401    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 2401    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 2402    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | input_0             | torch.float32 |         | -55.6243553       | 56.0939903       | 0.9086712      | 219.2510529           | torch.Size([2, 512, 8, 4])       |
| 2402    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | output              | torch.float32 |         | -55.6243553       | 56.0939903       | 0.9086712      | 219.2510529           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 2403    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.point_matmul                       | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 2403    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.point_matmul                       | input_1             | torch.float32 |         | -55.6243553       | 56.0939903       | 0.9086712      | 219.2510529           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 2403    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.point_matmul                       | output              | torch.float32 |         | -82.6193161       | 83.7195740       | 0.2719835      | 99.4406967            | torch.Size([2, 6, 512, 8, 4, 4]) |
| 2404    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.31.point_sum                          | input               | torch.float32 |         | -82.6193161       | 83.7195740       | 0.2719835      | 99.4406967            | torch.Size([2, 6, 512, 8, 4, 4]) |
| 2404    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.31.point_sum                          | output              | torch.float32 |         | -88.7622681       | 90.8340607       | 1.0879343      | 390.8836060           | torch.Size([2, 6, 512, 8, 4])    |
| 2405    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | input_0             | torch.float32 |         | -88.7622681       | 90.8340607       | 1.0879343      | 390.8836060           | torch.Size([2, 6, 512, 8, 4])    |
| 2405    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | output              | torch.float32 |         | -56.9913521       | 55.3342209       | -0.5323923     | 433.1766663           | torch.Size([2, 6, 512, 8, 1])    |
| 2406    | torch.clamp                                                                       | head.layers.31                                    | input               | torch.float32 |         | -56.9913521       | 55.3342209       | -0.5323923     | 433.1766663           | torch.Size([2, 6, 512, 8, 1])    |
| 2406    | torch.clamp                                                                       | head.layers.31                                    | output              | torch.float32 |         | 0.0000100         | 55.3342209       | 7.4446430      | 154.3790283           | torch.Size([2, 6, 512, 8, 1])    |
| 2407    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.31.reciprocal_op                      | input               | torch.float32 |         | 0.0000100         | 55.3342209       | 7.4446430      | 154.3790283           | torch.Size([2, 6, 512, 8, 1])    |
| 2407    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.31.reciprocal_op                      | output              | torch.float32 |         | 0.0180720         | 100000.0000000   | 54030.6679688  | 2483772672.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 2408    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | input_0             | torch.float32 |         | -88.7622681       | 90.8340607       | 1.0879343      | 390.8836060           | torch.Size([2, 6, 512, 8, 4])    |
| 2408    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | output              | torch.float32 |         | -88.7622681       | 90.8340607       | 1.9420646      | 563.1389160           | torch.Size([2, 6, 512, 8, 2])    |
| 2409    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.point_mul                          | input_0             | torch.float32 |         | -88.7622681       | 90.8340607       | 1.9420646      | 563.1389160           | torch.Size([2, 6, 512, 8, 2])    |
| 2409    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.point_mul                          | input_1             | torch.float32 |         | 0.0180720         | 100000.0000000   | 54030.6679688  | 2483772672.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 2409    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.point_mul                          | output              | torch.float32 |         | -8744067.0000000  | 9083406.0000000  | 247201.7500000 | 2868962656256.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 2410    | torch.Tensor.flatten                                                              | head.layers.31                                    | input               | torch.float32 |         | -8744067.0000000  | 9083406.0000000  | 247201.7500000 | 2868962656256.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 2410    | torch.Tensor.flatten                                                              | head.layers.31                                    | output              | torch.float32 |         | -8744067.0000000  | 9083406.0000000  | 247201.7500000 | 2868962656256.0000000 | torch.Size([12, 512, 8, 2])      |
| 2411    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.31                                    | input_0             | torch.float32 |         | -44.8620338       | 31.9191360       | 0.1436918      | 20.2713203            | torch.Size([12, 256, 16, 44])    |
| 2411    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.31                                    | input_1             | torch.float32 |         | -8744067.0000000  | 9083406.0000000  | 247201.7500000 | 2868962656256.0000000 | torch.Size([12, 512, 8, 2])      |
| 2411    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.31                                    | output              | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([12, 256, 512, 8])    |
| 2412    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.31.feat_cat                           | input               | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([12, 256, 512, 8])    |
| 2412    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.31.feat_cat                           | output              | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([12, 256, 512, 8])    |
| 2413    | torch.Tensor.view                                                                 | head.layers.31                                    | input_0             | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([12, 256, 512, 8])    |
| 2413    | torch.Tensor.view                                                                 | head.layers.31                                    | output              | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 6, 256, 512, 8])  |
| 2414    | torch.Tensor.permute                                                              | head.layers.31                                    | input_0             | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 6, 256, 512, 8])  |
| 2414    | torch.Tensor.permute                                                              | head.layers.31                                    | output              | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 512, 6, 8, 256])  |
| 2415    | torch.Tensor.contiguous                                                           | head.layers.31                                    | input               | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 512, 6, 8, 256])  |
| 2415    | torch.Tensor.contiguous                                                           | head.layers.31                                    | output              | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 512, 6, 8, 256])  |
| 2416    | torch.Tensor.view                                                                 | head.layers.31                                    | input_0             | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 512, 6, 8, 256])  |
| 2416    | torch.Tensor.view                                                                 | head.layers.31                                    | output              | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 512, 48, 256])    |
| 2417    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | input_0             | torch.float32 |         | 0.0000000         | 0.2550656        | 0.0208333      | 0.0010615             | torch.Size([2, 512, 48, 8])      |
| 2417    | torch.Tensor.__getitem__                                                          | head.layers.31                                    | output              | torch.float32 |         | 0.0000000         | 0.2550656        | 0.0208333      | 0.0010615             | torch.Size([2, 512, 48, 8, 1])   |
| 2418    | torch.Tensor.reshape                                                              | head.layers.31                                    | input_0             | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 512, 48, 256])    |
| 2418    | torch.Tensor.reshape                                                              | head.layers.31                                    | output              | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 512, 48, 8, 32])  |
| 2419    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.feat_mul                           | input_0             | torch.float32 |         | 0.0000000         | 0.2550656        | 0.0208333      | 0.0010615             | torch.Size([2, 512, 48, 8, 1])   |
| 2419    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.feat_mul                           | input_1             | torch.float32 |         | -37.0876617       | 31.6109123       | 0.0281500      | 2.9241390             | torch.Size([2, 512, 48, 8, 32])  |
| 2419    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.31.feat_mul                           | output              | torch.float32 |         | -3.2408819        | 3.3187678        | 0.0005659      | 0.0043765             | torch.Size([2, 512, 48, 8, 32])  |
| 2420    | torch.Tensor.view                                                                 | head.layers.31                                    | input_0             | torch.float32 |         | -3.2408819        | 3.3187678        | 0.0005659      | 0.0043765             | torch.Size([2, 512, 48, 8, 32])  |
| 2420    | torch.Tensor.view                                                                 | head.layers.31                                    | output              | torch.float32 |         | -3.2408819        | 3.3187678        | 0.0005659      | 0.0043765             | torch.Size([2, 512, 48, 256])    |
| 2421    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.31.feat_sum                           | input               | torch.float32 |         | -3.2408819        | 3.3187678        | 0.0005659      | 0.0043765             | torch.Size([2, 512, 48, 256])    |
| 2421    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.31.feat_sum                           | output              | torch.float32 |         | -5.3141131        | 5.1716299        | 0.0271626      | 0.4215531             | torch.Size([2, 512, 256])        |
| 2422    | torch.nn.modules.linear.Linear                                                    | head.layers.31.output_proj                        | input               | torch.float32 |         | -5.3141131        | 5.1716299        | 0.0271626      | 0.4215531             | torch.Size([2, 512, 256])        |
| 2422    | torch.nn.modules.linear.Linear                                                    | head.layers.31.output_proj                        | weight              | torch.float32 |         | -0.3630883        | 0.3866604        | -0.0003614     | 0.0071088             | torch.Size([256, 256])           |
| 2422    | torch.nn.modules.linear.Linear                                                    | head.layers.31.output_proj                        | bias                | torch.float32 |         | -0.1024493        | 0.1036076        | 0.0021211      | 0.0015196             | torch.Size([256])                |
| 2422    | torch.nn.modules.linear.Linear                                                    | head.layers.31.output_proj                        | output              | torch.float32 |         | -9.1042051        | 8.1927261        | -0.0093622     | 0.7129421             | torch.Size([2, 512, 256])        |
| 2423    | torch.nn.modules.dropout.Dropout                                                  | head.layers.31.proj_drop                          | input               | torch.float32 |         | -9.1042051        | 8.1927261        | -0.0093622     | 0.7129421             | torch.Size([2, 512, 256])        |
| 2423    | torch.nn.modules.dropout.Dropout                                                  | head.layers.31.proj_drop                          | output              | torch.float32 |         | -9.1042051        | 8.1927261        | -0.0093622     | 0.7129421             | torch.Size([2, 512, 256])        |
| 2424    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.31.residual_op                        | input_0             | torch.float32 |         | -9.1042051        | 8.1927261        | -0.0093622     | 0.7129421             | torch.Size([2, 512, 256])        |
| 2424    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.31.residual_op                        | input_1             | torch.float32 |         | -6.2538772        | 4.9512348        | -0.0001131     | 0.6297012             | torch.Size([2, 512, 256])        |
| 2424    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.31.residual_op                        | output              | torch.float32 |         | -9.1042051        | 8.1927261        | -0.0047377     | 0.6713418             | torch.Size([2, 512, 512])        |
| 2425    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.32.pre_norm.input_mean.mean           | input_0             | torch.float32 |         | -9.1042051        | 8.1927261        | -0.0047377     | 0.6713418             | torch.Size([2, 512, 512])        |
| 2425    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.32.pre_norm.input_mean.mean           | output              | torch.float32 |         | -0.1058083        | 0.0553006        | -0.0047377     | 0.0002916             | torch.Size([2, 512, 1])          |
| 2426    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.32.pre_norm.sub                       | input_0             | torch.float32 |         | -9.1042051        | 8.1927261        | -0.0047377     | 0.6713418             | torch.Size([2, 512, 512])        |
| 2426    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.32.pre_norm.sub                       | input_1             | torch.float32 |         | -0.1058083        | 0.0553006        | -0.0047377     | 0.0002916             | torch.Size([2, 512, 1])          |
| 2426    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.32.pre_norm.sub                       | output              | torch.float32 |         | -8.9983969        | 8.2119102        | -0.0000000     | 0.6710504             | torch.Size([2, 512, 512])        |
| 2427    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.mul                       | input_0             | torch.float32 |         | -8.9983969        | 8.2119102        | -0.0000000     | 0.6710504             | torch.Size([2, 512, 512])        |
| 2427    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.mul                       | input_1             | torch.float32 |         | -8.9983969        | 8.2119102        | -0.0000000     | 0.6710504             | torch.Size([2, 512, 512])        |
| 2427    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.mul                       | output              | torch.float32 |         | 0.0000000         | 80.9711456       | 0.6710492      | 5.0183210             | torch.Size([2, 512, 512])        |
| 2428    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.32.pre_norm.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 80.9711456       | 0.6710492      | 5.0183210             | torch.Size([2, 512, 512])        |
| 2428    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.32.pre_norm.var_mean.mean             | output              | torch.float32 |         | 0.3726116         | 3.7209675        | 0.6710492      | 0.0924045             | torch.Size([2, 512, 1])          |
| 2429    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.32.pre_norm.rsqrt                     | input               | torch.float32 |         | 0.3726116         | 3.7209675        | 0.6710492      | 0.0924045             | torch.Size([2, 512, 1])          |
| 2429    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.32.pre_norm.rsqrt                     | output              | torch.float32 |         | 0.5184078         | 1.6381963        | 1.2811863      | 0.0431694             | torch.Size([2, 512, 1])          |
| 2430    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.out_mul                   | input_0             | torch.float32 |         | -8.9983969        | 8.2119102        | -0.0000000     | 0.6710504             | torch.Size([2, 512, 512])        |
| 2430    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.out_mul                   | input_1             | torch.float32 |         | 0.5184078         | 1.6381963        | 1.2811863      | 0.0431694             | torch.Size([2, 512, 1])          |
| 2430    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.out_mul                   | output              | torch.float32 |         | -9.0218182        | 7.6026726        | -0.0000000     | 0.9999851             | torch.Size([2, 512, 512])        |
| 2431    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.32.pre_norm.weight_quant              | input               | torch.float32 |         | 0.6255694         | 1.5848855        | 1.0149837      | 0.0841199             | torch.Size([512])                |
| 2431    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.32.pre_norm.weight_quant              | output              | torch.float32 |         | 0.6255694         | 1.5848855        | 1.0149837      | 0.0841199             | torch.Size([512])                |
| 2432    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.weight_mul                | input_0             | torch.float32 |         | -9.0218182        | 7.6026726        | -0.0000000     | 0.9999851             | torch.Size([2, 512, 512])        |
| 2432    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.weight_mul                | input_1             | torch.float32 |         | 0.6255694         | 1.5848855        | 1.0149837      | 0.0841199             | torch.Size([512])                |
| 2432    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.32.pre_norm.weight_mul                | output              | torch.float32 |         | -6.5038953        | 5.7393680        | 0.0093286      | 0.7942370             | torch.Size([2, 512, 512])        |
| 2433    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.32.pre_norm.bias_quant                | input               | torch.float32 |         | -0.1540265        | 0.1764562        | -0.0054709     | 0.0019368             | torch.Size([512])                |
| 2433    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.32.pre_norm.bias_quant                | output              | torch.float32 |         | -0.1540265        | 0.1764562        | -0.0054709     | 0.0019368             | torch.Size([512])                |
| 2434    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.32.pre_norm.bias_add                  | input_0             | torch.float32 |         | -6.5038953        | 5.7393680        | 0.0093286      | 0.7942370             | torch.Size([2, 512, 512])        |
| 2434    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.32.pre_norm.bias_add                  | input_1             | torch.float32 |         | -0.1540265        | 0.1764562        | -0.0054709     | 0.0019368             | torch.Size([512])                |
| 2434    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.32.pre_norm.bias_add                  | output              | torch.float32 |         | -6.3343501        | 5.7370782        | 0.0038577      | 0.7856006             | torch.Size([2, 512, 512])        |
| 2435    | torch.nn.modules.linear.Linear                                                    | head.layers.32.layers.0.0                         | input               | torch.float32 |         | -6.3343501        | 5.7370782        | 0.0038577      | 0.7856006             | torch.Size([2, 512, 512])        |
| 2435    | torch.nn.modules.linear.Linear                                                    | head.layers.32.layers.0.0                         | weight              | torch.float32 |         | -0.4811940        | 0.5423552        | -0.0007460     | 0.0070652             | torch.Size([1024, 512])          |
| 2435    | torch.nn.modules.linear.Linear                                                    | head.layers.32.layers.0.0                         | bias                | torch.float32 |         | -0.2153661        | 0.0513395        | -0.0674493     | 0.0012690             | torch.Size([1024])               |
| 2435    | torch.nn.modules.linear.Linear                                                    | head.layers.32.layers.0.0                         | output              | torch.float32 |         | -18.6205215       | 14.5785351       | -3.2416220     | 10.4741316            | torch.Size([2, 512, 1024])       |
| 2436    | torch.nn.modules.activation.ReLU                                                  | head.layers.32.activate                           | input               | torch.float32 |         | 0.0000000         | 14.5785351       | 0.2912122      | 1.1158960             | torch.Size([2, 512, 1024])       |
| 2436    | torch.nn.modules.activation.ReLU                                                  | head.layers.32.activate                           | output              | torch.float32 |         | 0.0000000         | 14.5785351       | 0.2912122      | 1.1158960             | torch.Size([2, 512, 1024])       |
| 2437    | torch.nn.modules.dropout.Dropout                                                  | head.layers.32.layers.0.2                         | input               | torch.float32 |         | 0.0000000         | 14.5785351       | 0.2912122      | 1.1158960             | torch.Size([2, 512, 1024])       |
| 2437    | torch.nn.modules.dropout.Dropout                                                  | head.layers.32.layers.0.2                         | output              | torch.float32 |         | 0.0000000         | 14.5785351       | 0.2912122      | 1.1158960             | torch.Size([2, 512, 1024])       |
| 2438    | torch.nn.modules.linear.Linear                                                    | head.layers.32.layers.1                           | input               | torch.float32 |         | 0.0000000         | 14.5785351       | 0.2912122      | 1.1158960             | torch.Size([2, 512, 1024])       |
| 2438    | torch.nn.modules.linear.Linear                                                    | head.layers.32.layers.1                           | weight              | torch.float32 |         | -0.5106656        | 0.5106861        | 0.0000796      | 0.0075136             | torch.Size([256, 1024])          |
| 2438    | torch.nn.modules.linear.Linear                                                    | head.layers.32.layers.1                           | bias                | torch.float32 |         | -0.1172329        | 0.0823930        | -0.0002596     | 0.0010212             | torch.Size([256])                |
| 2438    | torch.nn.modules.linear.Linear                                                    | head.layers.32.layers.1                           | output              | torch.float32 |         | -32.2150879       | 34.6668205       | 0.0817522      | 37.9320374            | torch.Size([2, 512, 256])        |
| 2439    | torch.nn.modules.dropout.Dropout                                                  | head.layers.32.layers.2                           | input               | torch.float32 |         | -32.2150879       | 34.6668205       | 0.0817522      | 37.9320374            | torch.Size([2, 512, 256])        |
| 2439    | torch.nn.modules.dropout.Dropout                                                  | head.layers.32.layers.2                           | output              | torch.float32 |         | -32.2150879       | 34.6668205       | 0.0817522      | 37.9320374            | torch.Size([2, 512, 256])        |
| 2440    | torch.nn.modules.linear.Linear                                                    | head.layers.32.identity_fc                        | input               | torch.float32 |         | -6.3343501        | 5.7370782        | 0.0038577      | 0.7856006             | torch.Size([2, 512, 512])        |
| 2440    | torch.nn.modules.linear.Linear                                                    | head.layers.32.identity_fc                        | weight              | torch.float32 |         | -0.4469438        | 0.4948564        | -0.0002955     | 0.0082387             | torch.Size([256, 512])           |
| 2440    | torch.nn.modules.linear.Linear                                                    | head.layers.32.identity_fc                        | bias                | torch.float32 |         | -0.1482334        | 0.0840410        | -0.0011662     | 0.0011191             | torch.Size([256])                |
| 2440    | torch.nn.modules.linear.Linear                                                    | head.layers.32.identity_fc                        | output              | torch.float32 |         | -16.8191376       | 18.6003418       | 0.0093829      | 13.6387882            | torch.Size([2, 512, 256])        |
| 2441    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.32.short_add                          | input_0             | torch.float32 |         | -16.8191376       | 18.6003418       | 0.0093829      | 13.6387882            | torch.Size([2, 512, 256])        |
| 2441    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.32.short_add                          | input_1             | torch.float32 |         | -32.2150879       | 34.6668205       | 0.0817522      | 37.9320374            | torch.Size([2, 512, 256])        |
| 2441    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.32.short_add                          | output              | torch.float32 |         | -46.4351959       | 44.8732147       | 0.0911351      | 69.2337189            | torch.Size([2, 512, 256])        |
| 2442    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.33.input_mean.mean                    | input_0             | torch.float32 |         | -46.4351959       | 44.8732147       | 0.0911351      | 69.2337189            | torch.Size([2, 512, 256])        |
| 2442    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.33.input_mean.mean                    | output              | torch.float32 |         | -0.1230062        | 0.3169204        | 0.0911351      | 0.0185266             | torch.Size([2, 512, 1])          |
| 2443    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.33.sub                                | input_0             | torch.float32 |         | -46.4351959       | 44.8732147       | 0.0911351      | 69.2337189            | torch.Size([2, 512, 256])        |
| 2443    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.33.sub                                | input_1             | torch.float32 |         | -0.1230062        | 0.3169204        | 0.0911351      | 0.0185266             | torch.Size([2, 512, 1])          |
| 2443    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.33.sub                                | output              | torch.float32 |         | -46.7521172       | 44.5667839       | -0.0000000     | 69.2152176            | torch.Size([2, 512, 256])        |
| 2444    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.33.mul                                | input_0             | torch.float32 |         | -46.7521172       | 44.5667839       | -0.0000000     | 69.2152176            | torch.Size([2, 512, 256])        |
| 2444    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.33.mul                                | input_1             | torch.float32 |         | -46.7521172       | 44.5667839       | -0.0000000     | 69.2152176            | torch.Size([2, 512, 256])        |
| 2444    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.33.mul                                | output              | torch.float32 |         | 0.0000000         | 2185.7604980     | 69.2149506     | 32862.5507812         | torch.Size([2, 512, 256])        |
| 2445    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.33.var_mean.mean                      | input_0             | torch.float32 |         | 0.0000000         | 2185.7604980     | 69.2149506     | 32862.5507812         | torch.Size([2, 512, 256])        |
| 2445    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.33.var_mean.mean                      | output              | torch.float32 |         | 6.9833560         | 239.9463196      | 69.2149506     | 8379.8085938          | torch.Size([2, 512, 1])          |
| 2446    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.33.rsqrt                              | input               | torch.float32 |         | 6.9833560         | 239.9463196      | 69.2149506     | 8379.8085938          | torch.Size([2, 512, 1])          |
| 2446    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.33.rsqrt                              | output              | torch.float32 |         | 0.0645569         | 0.3784144        | 0.2170737      | 0.0099698             | torch.Size([2, 512, 1])          |
| 2447    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.33.out_mul                            | input_0             | torch.float32 |         | -46.7521172       | 44.5667839       | -0.0000000     | 69.2152176            | torch.Size([2, 512, 256])        |
| 2447    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.33.out_mul                            | input_1             | torch.float32 |         | 0.0645569         | 0.3784144        | 0.2170737      | 0.0099698             | torch.Size([2, 512, 1])          |
| 2447    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.33.out_mul                            | output              | torch.float32 |         | -5.2169042        | 5.7444468        | -0.0000000     | 1.0000032             | torch.Size([2, 512, 256])        |
| 2448    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.33.weight_quant                       | input               | torch.float32 |         | 0.5037270         | 1.1255741        | 0.9008017      | 0.0102990             | torch.Size([256])                |
| 2448    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.33.weight_quant                       | output              | torch.float32 |         | 0.5037270         | 1.1255741        | 0.9008017      | 0.0102990             | torch.Size([256])                |
| 2449    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.33.weight_mul                         | input_0             | torch.float32 |         | -5.2169042        | 5.7444468        | -0.0000000     | 1.0000032             | torch.Size([2, 512, 256])        |
| 2449    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.33.weight_mul                         | input_1             | torch.float32 |         | 0.5037270         | 1.1255741        | 0.9008017      | 0.0102990             | torch.Size([256])                |
| 2449    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.33.weight_mul                         | output              | torch.float32 |         | -4.3914061        | 5.3037744        | 0.0010537      | 0.8226817             | torch.Size([2, 512, 256])        |
| 2450    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.33.bias_quant                         | input               | torch.float32 |         | -0.0986191        | 0.1023723        | 0.0041659      | 0.0009013             | torch.Size([256])                |
| 2450    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.33.bias_quant                         | output              | torch.float32 |         | -0.0986191        | 0.1023723        | 0.0041659      | 0.0009013             | torch.Size([256])                |
| 2451    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.33.bias_add                           | input_0             | torch.float32 |         | -4.3914061        | 5.3037744        | 0.0010537      | 0.8226817             | torch.Size([2, 512, 256])        |
| 2451    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.33.bias_add                           | input_1             | torch.float32 |         | -0.0986191        | 0.1023723        | 0.0041659      | 0.0009013             | torch.Size([256])                |
| 2451    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.33.bias_add                           | output              | torch.float32 |         | -4.3518085        | 5.2251782        | 0.0052197      | 0.8203376             | torch.Size([2, 512, 256])        |
| 2452    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.add1                               | input_0             | torch.float32 |         | -4.3518085        | 5.2251782        | 0.0052197      | 0.8203376             | torch.Size([2, 512, 256])        |
| 2452    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.add1                               | input_1             | torch.float32 |         | -1.7250781        | 7.3573351        | 0.0582166      | 0.8942280             | torch.Size([2, 512, 256])        |
| 2452    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.add1                               | output              | torch.float32 |         | -4.1016655        | 7.6160474        | 0.0634363      | 1.4182142             | torch.Size([2, 512, 256])        |
| 2453    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.0                           | input               | torch.float32 |         | -4.1016655        | 7.6160474        | 0.0634363      | 1.4182142             | torch.Size([2, 512, 256])        |
| 2453    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.0                           | weight              | torch.float32 |         | -0.6512140        | 0.6423623        | 0.0001085      | 0.0063452             | torch.Size([256, 256])           |
| 2453    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.0                           | bias                | torch.float32 |         | -0.1916889        | 0.1006546        | -0.0401542     | 0.0026011             | torch.Size([256])                |
| 2453    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.0                           | output              | torch.float32 |         | -10.9200268       | 11.6324425       | -0.9697115     | 4.6656466             | torch.Size([2, 512, 256])        |
| 2454    | torch.nn.modules.activation.ReLU                                                  | head.layers.34.layers.1                           | input               | torch.float32 |         | 0.0000000         | 11.6324425       | 0.4407123      | 0.8166509             | torch.Size([2, 512, 256])        |
| 2454    | torch.nn.modules.activation.ReLU                                                  | head.layers.34.layers.1                           | output              | torch.float32 |         | 0.0000000         | 11.6324425       | 0.4407123      | 0.8166509             | torch.Size([2, 512, 256])        |
| 2455    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.2                           | input               | torch.float32 |         | 0.0000000         | 11.6324425       | 0.4407123      | 0.8166509             | torch.Size([2, 512, 256])        |
| 2455    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.2                           | weight              | torch.float32 |         | -0.5759249        | 0.3917674        | -0.0049621     | 0.0060694             | torch.Size([256, 256])           |
| 2455    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.2                           | bias                | torch.float32 |         | -0.1434172        | 0.2241302        | -0.0092912     | 0.0040547             | torch.Size([256])                |
| 2455    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.2                           | output              | torch.float32 |         | -13.0451746       | 9.0334177        | -0.4976707     | 3.0915117             | torch.Size([2, 512, 256])        |
| 2456    | torch.nn.modules.activation.ReLU                                                  | head.layers.34.layers.3                           | input               | torch.float32 |         | 0.0000000         | 9.0334177        | 0.4586229      | 0.6450561             | torch.Size([2, 512, 256])        |
| 2456    | torch.nn.modules.activation.ReLU                                                  | head.layers.34.layers.3                           | output              | torch.float32 |         | 0.0000000         | 9.0334177        | 0.4586229      | 0.6450561             | torch.Size([2, 512, 256])        |
| 2457    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.34.layers.4.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 9.0334177        | 0.4586229      | 0.6450561             | torch.Size([2, 512, 256])        |
| 2457    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.34.layers.4.input_mean.mean           | output              | torch.float32 |         | 0.2731480         | 0.7757403        | 0.4586229      | 0.0063968             | torch.Size([2, 512, 1])          |
| 2458    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.34.layers.4.sub                       | input_0             | torch.float32 |         | 0.0000000         | 9.0334177        | 0.4586229      | 0.6450561             | torch.Size([2, 512, 256])        |
| 2458    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.34.layers.4.sub                       | input_1             | torch.float32 |         | 0.2731480         | 0.7757403        | 0.4586229      | 0.0063968             | torch.Size([2, 512, 1])          |
| 2458    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.34.layers.4.sub                       | output              | torch.float32 |         | -0.7757403        | 8.3857679        | 0.0000000      | 0.6386655             | torch.Size([2, 512, 256])        |
| 2459    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.mul                       | input_0             | torch.float32 |         | -0.7757403        | 8.3857679        | 0.0000000      | 0.6386655             | torch.Size([2, 512, 256])        |
| 2459    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.mul                       | input_1             | torch.float32 |         | -0.7757403        | 8.3857679        | 0.0000000      | 0.6386655             | torch.Size([2, 512, 256])        |
| 2459    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.mul                       | output              | torch.float32 |         | 0.0000000         | 70.3211060       | 0.6386631      | 2.8173978             | torch.Size([2, 512, 256])        |
| 2460    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.34.layers.4.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 70.3211060       | 0.6386631      | 2.8173978             | torch.Size([2, 512, 256])        |
| 2460    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.34.layers.4.var_mean.mean             | output              | torch.float32 |         | 0.2145387         | 1.8852019        | 0.6386631      | 0.0517519             | torch.Size([2, 512, 1])          |
| 2461    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.34.layers.4.rsqrt                     | input               | torch.float32 |         | 0.2145387         | 1.8852019        | 0.6386631      | 0.0517519             | torch.Size([2, 512, 1])          |
| 2461    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.34.layers.4.rsqrt                     | output              | torch.float32 |         | 0.7283161         | 2.1589224        | 1.3090870      | 0.0523405             | torch.Size([2, 512, 1])          |
| 2462    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.out_mul                   | input_0             | torch.float32 |         | -0.7757403        | 8.3857679        | 0.0000000      | 0.6386655             | torch.Size([2, 512, 256])        |
| 2462    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.out_mul                   | input_1             | torch.float32 |         | 0.7283161         | 2.1589224        | 1.3090870      | 0.0523405             | torch.Size([2, 512, 1])          |
| 2462    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.out_mul                   | output              | torch.float32 |         | -0.6835904        | 7.5336299        | 0.0000000      | 0.9999862             | torch.Size([2, 512, 256])        |
| 2463    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.4.weight_quant              | input               | torch.float32 |         | 0.6686562         | 1.1948749        | 0.9568136      | 0.0086885             | torch.Size([256])                |
| 2463    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.4.weight_quant              | output              | torch.float32 |         | 0.6686562         | 1.1948749        | 0.9568136      | 0.0086885             | torch.Size([256])                |
| 2464    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.weight_mul                | input_0             | torch.float32 |         | -0.6835904        | 7.5336299        | 0.0000000      | 0.9999862             | torch.Size([2, 512, 256])        |
| 2464    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.weight_mul                | input_1             | torch.float32 |         | 0.6686562         | 1.1948749        | 0.9568136      | 0.0086885             | torch.Size([256])                |
| 2464    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.4.weight_mul                | output              | torch.float32 |         | -0.8168049        | 7.5867405        | 0.0151543      | 0.9729467             | torch.Size([2, 512, 256])        |
| 2465    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.4.bias_quant                | input               | torch.float32 |         | -0.1362740        | 0.3444038        | 0.0655811      | 0.0123684             | torch.Size([256])                |
| 2465    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.4.bias_quant                | output              | torch.float32 |         | -0.1362740        | 0.3444038        | 0.0655811      | 0.0123684             | torch.Size([256])                |
| 2466    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.layers.4.bias_add                  | input_0             | torch.float32 |         | -0.8168049        | 7.5867405        | 0.0151543      | 0.9729467             | torch.Size([2, 512, 256])        |
| 2466    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.layers.4.bias_add                  | input_1             | torch.float32 |         | -0.1362740        | 0.3444038        | 0.0655811      | 0.0123684             | torch.Size([256])                |
| 2466    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.layers.4.bias_add                  | output              | torch.float32 |         | -0.8248793        | 7.6168127        | 0.0807354      | 0.9206790             | torch.Size([2, 512, 256])        |
| 2467    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.5                           | input               | torch.float32 |         | -0.8248793        | 7.6168127        | 0.0807354      | 0.9206790             | torch.Size([2, 512, 256])        |
| 2467    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.5                           | weight              | torch.float32 |         | -0.5576459        | 0.4978588        | 0.0026662      | 0.0046605             | torch.Size([256, 256])           |
| 2467    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.5                           | bias                | torch.float32 |         | -0.1226624        | 0.0810974        | -0.0227243     | 0.0021841             | torch.Size([256])                |
| 2467    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.5                           | output              | torch.float32 |         | -8.6922178        | 9.6573706        | -0.5669703     | 3.6397641             | torch.Size([2, 512, 256])        |
| 2468    | torch.nn.modules.activation.ReLU                                                  | head.layers.34.layers.6                           | input               | torch.float32 |         | 0.0000000         | 9.6573706        | 0.5251758      | 1.0623730             | torch.Size([2, 512, 256])        |
| 2468    | torch.nn.modules.activation.ReLU                                                  | head.layers.34.layers.6                           | output              | torch.float32 |         | 0.0000000         | 9.6573706        | 0.5251758      | 1.0623730             | torch.Size([2, 512, 256])        |
| 2469    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.7                           | input               | torch.float32 |         | 0.0000000         | 9.6573706        | 0.5251758      | 1.0623730             | torch.Size([2, 512, 256])        |
| 2469    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.7                           | weight              | torch.float32 |         | -0.4486472        | 0.5366535        | -0.0039619     | 0.0033260             | torch.Size([256, 256])           |
| 2469    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.7                           | bias                | torch.float32 |         | -0.0953889        | 0.2466190        | -0.0158665     | 0.0018828             | torch.Size([256])                |
| 2469    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.7                           | output              | torch.float32 |         | -10.4222651       | 30.5207767       | -1.1838269     | 5.3266363             | torch.Size([2, 512, 256])        |
| 2470    | torch.nn.modules.activation.ReLU                                                  | head.layers.34.layers.8                           | input               | torch.float32 |         | 0.0000000         | 30.5207767       | 0.4571123      | 2.0142655             | torch.Size([2, 512, 256])        |
| 2470    | torch.nn.modules.activation.ReLU                                                  | head.layers.34.layers.8                           | output              | torch.float32 |         | 0.0000000         | 30.5207767       | 0.4571123      | 2.0142655             | torch.Size([2, 512, 256])        |
| 2471    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.34.layers.9.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 30.5207767       | 0.4571123      | 2.0142655             | torch.Size([2, 512, 256])        |
| 2471    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.34.layers.9.input_mean.mean           | output              | torch.float32 |         | 0.2118180         | 1.1859132        | 0.4571123      | 0.0184832             | torch.Size([2, 512, 1])          |
| 2472    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.34.layers.9.sub                       | input_0             | torch.float32 |         | 0.0000000         | 30.5207767       | 0.4571123      | 2.0142655             | torch.Size([2, 512, 256])        |
| 2472    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.34.layers.9.sub                       | input_1             | torch.float32 |         | 0.2118180         | 1.1859132        | 0.4571123      | 0.0184832             | torch.Size([2, 512, 1])          |
| 2472    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.34.layers.9.sub                       | output              | torch.float32 |         | -1.1859132        | 30.2908478       | 0.0000000      | 1.9958003             | torch.Size([2, 512, 256])        |
| 2473    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.mul                       | input_0             | torch.float32 |         | -1.1859132        | 30.2908478       | 0.0000000      | 1.9958003             | torch.Size([2, 512, 256])        |
| 2473    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.mul                       | input_1             | torch.float32 |         | -1.1859132        | 30.2908478       | 0.0000000      | 1.9958003             | torch.Size([2, 512, 256])        |
| 2473    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.mul                       | output              | torch.float32 |         | 0.0000000         | 917.5354614      | 1.9957927      | 313.0202942           | torch.Size([2, 512, 256])        |
| 2474    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.34.layers.9.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 917.5354614      | 1.9957927      | 313.0202942           | torch.Size([2, 512, 256])        |
| 2474    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.34.layers.9.var_mean.mean             | output              | torch.float32 |         | 0.5004920         | 4.8017111        | 1.9957929      | 0.3213030             | torch.Size([2, 512, 1])          |
| 2475    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.34.layers.9.rsqrt                     | input               | torch.float32 |         | 0.5004920         | 4.8017111        | 1.9957929      | 0.3213030             | torch.Size([2, 512, 1])          |
| 2475    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.34.layers.9.rsqrt                     | output              | torch.float32 |         | 0.4563537         | 1.4135041        | 0.7292829      | 0.0114595             | torch.Size([2, 512, 1])          |
| 2476    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.out_mul                   | input_0             | torch.float32 |         | -1.1859132        | 30.2908478       | 0.0000000      | 1.9958003             | torch.Size([2, 512, 256])        |
| 2476    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.out_mul                   | input_1             | torch.float32 |         | 0.4563537         | 1.4135041        | 0.7292829      | 0.0114595             | torch.Size([2, 512, 1])          |
| 2476    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.out_mul                   | output              | torch.float32 |         | -0.5674778        | 15.0141726       | 0.0000000      | 0.9999983             | torch.Size([2, 512, 256])        |
| 2477    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.9.weight_quant              | input               | torch.float32 |         | 0.7519886         | 1.2372242        | 0.9132024      | 0.0028244             | torch.Size([256])                |
| 2477    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.9.weight_quant              | output              | torch.float32 |         | 0.7519886         | 1.2372242        | 0.9132024      | 0.0028244             | torch.Size([256])                |
| 2478    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.weight_mul                | input_0             | torch.float32 |         | -0.5674778        | 15.0141726       | 0.0000000      | 0.9999983             | torch.Size([2, 512, 256])        |
| 2478    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.weight_mul                | input_1             | torch.float32 |         | 0.7519886         | 1.2372242        | 0.9132024      | 0.0028244             | torch.Size([256])                |
| 2478    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.9.weight_mul                | output              | torch.float32 |         | -0.7020973        | 11.2904863       | 0.0013984      | 0.7617722             | torch.Size([2, 512, 256])        |
| 2479    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.9.bias_quant                | input               | torch.float32 |         | -0.2334981        | 0.1167177        | 0.0665926      | 0.0030043             | torch.Size([256])                |
| 2479    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.9.bias_quant                | output              | torch.float32 |         | -0.2334981        | 0.1167177        | 0.0665926      | 0.0030043             | torch.Size([256])                |
| 2480    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.layers.9.bias_add                  | input_0             | torch.float32 |         | -0.7020973        | 11.2904863       | 0.0013984      | 0.7617722             | torch.Size([2, 512, 256])        |
| 2480    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.layers.9.bias_add                  | input_1             | torch.float32 |         | -0.2334981        | 0.1167177        | 0.0665926      | 0.0030043             | torch.Size([256])                |
| 2480    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.layers.9.bias_add                  | output              | torch.float32 |         | -0.6603624        | 11.0569878       | 0.0679910      | 0.7186026             | torch.Size([2, 512, 256])        |
| 2481    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.10                          | input               | torch.float32 |         | -0.6603624        | 11.0569878       | 0.0679910      | 0.7186026             | torch.Size([2, 512, 256])        |
| 2481    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.10                          | weight              | torch.float32 |         | -0.4327374        | 0.5036364        | -0.0011054     | 0.0035315             | torch.Size([11, 256])            |
| 2481    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.10                          | bias                | torch.float32 |         | -0.0496347        | 0.0377057        | -0.0115086     | 0.0009391             | torch.Size([11])                 |
| 2481    | torch.nn.modules.linear.Linear                                                    | head.layers.34.layers.10                          | output              | torch.float32 |         | -9.9552765        | 14.5157642       | 0.0671444      | 2.9367282             | torch.Size([2, 512, 11])         |
| 2482    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.11.scale_quant_stub         | input               | torch.float32 |         | 0.0472322         | 0.3123411        | 0.1293254      | 0.0056835             | torch.Size([11])                 |
| 2482    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.34.layers.11.scale_quant_stub         | output              | torch.float32 |         | 0.0472322         | 0.3123411        | 0.1293254      | 0.0056835             | torch.Size([11])                 |
| 2483    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.11.mul                      | input_0             | torch.float32 |         | -9.9552765        | 14.5157642       | 0.0671444      | 2.9367282             | torch.Size([2, 512, 11])         |
| 2483    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.11.mul                      | input_1             | torch.float32 |         | 0.0472322         | 0.3123411        | 0.1293254      | 0.0056835             | torch.Size([11])                 |
| 2483    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.34.layers.11.mul                      | output              | torch.float32 |         | -1.3893650        | 1.6646321        | 0.0299268      | 0.0855665             | torch.Size([2, 512, 11])         |
| 2484    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.add2                               | input_0             | torch.float32 |         | -1.3893650        | 1.6646321        | 0.0299268      | 0.0855665             | torch.Size([2, 512, 11])         |
| 2484    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.add2                               | input_1             | torch.float32 |         | -53.4885979       | 53.6353264       | 0.2314168      | 80.4070053            | torch.Size([2, 512, 11])         |
| 2484    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.34.add2                               | output              | torch.float32 |         | -53.3880920       | 53.3906403       | 0.2613437      | 80.2549210            | torch.Size([2, 512, 11])         |
| 2485    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(4)                                   | input               | torch.float32 |         | -53.3880920       | 53.3906403       | 0.2613437      | 80.2549210            | torch.Size([2, 512, 11])         |
| 2485    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(4)                                   | output              | torch.float32 |         | -53.3880920       | 53.3906403       | 0.2613437      | 80.2549210            | torch.Size([2, 512, 11])         |
| 2486    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.3880920       | 53.3906403       | 0.2613437      | 80.2549210            | torch.Size([2, 512, 11])         |
| 2486    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -53.3880920       | 53.3906403       | 1.0072179      | 290.4691772           | torch.Size([2, 512, 3])          |
| 2487    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(6)                   | input               | torch.float32 |         | -53.3880920       | 53.3906403       | 1.0072179      | 290.4691772           | torch.Size([2, 512, 3])          |
| 2487    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(6)                   | weight              | torch.float32 |         | -0.9216561        | 0.9167990        | -0.0046354     | 0.1373587             | torch.Size([128, 3])             |
| 2487    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(6)                   | bias                | torch.float32 |         | -1.0762298        | 1.0183468        | -0.0273298     | 0.3650480             | torch.Size([128])                |
| 2487    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.0(6)                   | output              | torch.float32 |         | -33.3243675       | 35.2951317       | -0.1358321     | 71.1993866            | torch.Size([2, 512, 128])        |
| 2488    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(6)                   | input               | torch.float32 |         | 0.0000000         | 35.2951317       | 2.9243307      | 25.8478928            | torch.Size([2, 512, 128])        |
| 2488    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.1(6)                   | output              | torch.float32 |         | 0.0000000         | 35.2951317       | 2.9243307      | 25.8478928            | torch.Size([2, 512, 128])        |
| 2489    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 35.2951317       | 2.9243307      | 25.8478928            | torch.Size([2, 512, 128])        |
| 2489    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.input_mean.mean(6)   | output              | torch.float32 |         | 0.3569790         | 7.3316345        | 2.9243307      | 3.8305118             | torch.Size([2, 512, 1])          |
| 2490    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(6)               | input_0             | torch.float32 |         | 0.0000000         | 35.2951317       | 2.9243307      | 25.8478928            | torch.Size([2, 512, 128])        |
| 2490    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(6)               | input_1             | torch.float32 |         | 0.3569790         | 7.3316345        | 2.9243307      | 3.8305118             | torch.Size([2, 512, 1])          |
| 2490    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.2.sub(6)               | output              | torch.float32 |         | -7.3316345        | 29.6317406       | -0.0000000     | 22.0210934            | torch.Size([2, 512, 128])        |
| 2491    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(6)               | input_0             | torch.float32 |         | -7.3316345        | 29.6317406       | -0.0000000     | 22.0210934            | torch.Size([2, 512, 128])        |
| 2491    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(6)               | input_1             | torch.float32 |         | -7.3316345        | 29.6317406       | -0.0000000     | 22.0210934            | torch.Size([2, 512, 128])        |
| 2491    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.mul(6)               | output              | torch.float32 |         | 0.0000000         | 878.0400391      | 22.0209274     | 2639.7639160          | torch.Size([2, 512, 128])        |
| 2492    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(6)     | input_0             | torch.float32 |         | 0.0000000         | 878.0400391      | 22.0209274     | 2639.7639160          | torch.Size([2, 512, 128])        |
| 2492    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.2.var_mean.mean(6)     | output              | torch.float32 |         | 0.2789748         | 78.4355164       | 22.0209274     | 463.8496704           | torch.Size([2, 512, 1])          |
| 2493    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(6)             | input               | torch.float32 |         | 0.2789748         | 78.4355164       | 22.0209274     | 463.8496704           | torch.Size([2, 512, 1])          |
| 2493    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.2.rsqrt(6)             | output              | torch.float32 |         | 0.1129129         | 1.8932577        | 0.6312324      | 0.4524490             | torch.Size([2, 512, 1])          |
| 2494    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(6)           | input_0             | torch.float32 |         | -7.3316345        | 29.6317406       | -0.0000000     | 22.0210934            | torch.Size([2, 512, 128])        |
| 2494    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(6)           | input_1             | torch.float32 |         | 0.1129129         | 1.8932577        | 0.6312324      | 0.4524490             | torch.Size([2, 512, 1])          |
| 2494    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.out_mul(6)           | output              | torch.float32 |         | -0.8841147        | 3.9556508        | -0.0000000     | 0.9999991             | torch.Size([2, 512, 128])        |
| 2495    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(6)      | input               | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 2495    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.weight_quant(6)      | output              | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 2496    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(6)        | input_0             | torch.float32 |         | -0.8841147        | 3.9556508        | -0.0000000     | 0.9999991             | torch.Size([2, 512, 128])        |
| 2496    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(6)        | input_1             | torch.float32 |         | 0.7278287         | 1.3287159        | 0.9627235      | 0.0086877             | torch.Size([128])                |
| 2496    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.2.weight_mul(6)        | output              | torch.float32 |         | -1.0490301        | 3.8576176        | -0.0038083     | 0.9292956             | torch.Size([2, 512, 128])        |
| 2497    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(6)        | input               | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 2497    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.2.bias_quant(6)        | output              | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 2498    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(6)          | input_0             | torch.float32 |         | -1.0490301        | 3.8576176        | -0.0038083     | 0.9292956             | torch.Size([2, 512, 128])        |
| 2498    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(6)          | input_1             | torch.float32 |         | -0.0562531        | 0.0804052        | 0.0088204      | 0.0005294             | torch.Size([128])                |
| 2498    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.2.bias_add(6)          | output              | torch.float32 |         | -1.0493263        | 3.8385780        | 0.0050121      | 0.9239565             | torch.Size([2, 512, 128])        |
| 2499    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(6)                   | input               | torch.float32 |         | -1.0493263        | 3.8385780        | 0.0050121      | 0.9239565             | torch.Size([2, 512, 128])        |
| 2499    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(6)                   | weight              | torch.float32 |         | -0.3750711        | 0.3968706        | 0.0019093      | 0.0048458             | torch.Size([128, 128])           |
| 2499    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(6)                   | bias                | torch.float32 |         | -0.1863807        | 0.1385574        | -0.0156467     | 0.0047256             | torch.Size([128])                |
| 2499    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.3(6)                   | output              | torch.float32 |         | -6.1676307        | 6.6421847        | -0.0949417     | 2.2005911             | torch.Size([2, 512, 128])        |
| 2500    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(6)                   | input               | torch.float32 |         | 0.0000000         | 6.6421847        | 0.5266311      | 0.7898608             | torch.Size([2, 512, 128])        |
| 2500    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.4(6)                   | output              | torch.float32 |         | 0.0000000         | 6.6421847        | 0.5266311      | 0.7898608             | torch.Size([2, 512, 128])        |
| 2501    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 6.6421847        | 0.5266311      | 0.7898608             | torch.Size([2, 512, 128])        |
| 2501    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.input_mean.mean(6)   | output              | torch.float32 |         | 0.2882320         | 0.9191173        | 0.5266311      | 0.0451839             | torch.Size([2, 512, 1])          |
| 2502    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(6)               | input_0             | torch.float32 |         | 0.0000000         | 6.6421847        | 0.5266311      | 0.7898608             | torch.Size([2, 512, 128])        |
| 2502    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(6)               | input_1             | torch.float32 |         | 0.2882320         | 0.9191173        | 0.5266311      | 0.0451839             | torch.Size([2, 512, 1])          |
| 2502    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.5.sub(6)               | output              | torch.float32 |         | -0.9191173        | 5.7979374        | -0.0000000     | 0.7447208             | torch.Size([2, 512, 128])        |
| 2503    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(6)               | input_0             | torch.float32 |         | -0.9191173        | 5.7979374        | -0.0000000     | 0.7447208             | torch.Size([2, 512, 128])        |
| 2503    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(6)               | input_1             | torch.float32 |         | -0.9191173        | 5.7979374        | -0.0000000     | 0.7447208             | torch.Size([2, 512, 128])        |
| 2503    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.mul(6)               | output              | torch.float32 |         | 0.0000000         | 33.6160774       | 0.7447151      | 4.9712825             | torch.Size([2, 512, 128])        |
| 2504    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(6)     | input_0             | torch.float32 |         | 0.0000000         | 33.6160774       | 0.7447151      | 4.9712825             | torch.Size([2, 512, 128])        |
| 2504    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.5.var_mean.mean(6)     | output              | torch.float32 |         | 0.3053117         | 1.6333234        | 0.7447151      | 0.2008183             | torch.Size([2, 512, 1])          |
| 2505    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(6)             | input               | torch.float32 |         | 0.3053117         | 1.6333234        | 0.7447151      | 0.2008183             | torch.Size([2, 512, 1])          |
| 2505    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.5.rsqrt(6)             | output              | torch.float32 |         | 0.7824607         | 1.8097607        | 1.2946649      | 0.0992868             | torch.Size([2, 512, 1])          |
| 2506    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(6)           | input_0             | torch.float32 |         | -0.9191173        | 5.7979374        | -0.0000000     | 0.7447208             | torch.Size([2, 512, 128])        |
| 2506    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(6)           | input_1             | torch.float32 |         | 0.7824607         | 1.8097607        | 1.2946649      | 0.0992868             | torch.Size([2, 512, 1])          |
| 2506    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.out_mul(6)           | output              | torch.float32 |         | -0.7630751        | 7.0179882        | -0.0000000     | 0.9999899             | torch.Size([2, 512, 128])        |
| 2507    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(6)      | input               | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 2507    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.weight_quant(6)      | output              | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 2508    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(6)        | input_0             | torch.float32 |         | -0.7630751        | 7.0179882        | -0.0000000     | 0.9999899             | torch.Size([2, 512, 128])        |
| 2508    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(6)        | input_1             | torch.float32 |         | 0.5925044         | 1.4726304        | 0.9182085      | 0.0175060             | torch.Size([128])                |
| 2508    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.5.weight_mul(6)        | output              | torch.float32 |         | -0.9011990        | 6.8960648        | 0.0317744      | 0.9313583             | torch.Size([2, 512, 128])        |
| 2509    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(6)        | input               | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 2509    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.5.bias_quant(6)        | output              | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 2510    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(6)          | input_0             | torch.float32 |         | -0.9011990        | 6.8960648        | 0.0317744      | 0.9313583             | torch.Size([2, 512, 128])        |
| 2510    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(6)          | input_1             | torch.float32 |         | -0.0644210        | 0.2426097        | 0.0318023      | 0.0030999             | torch.Size([128])                |
| 2510    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.5.bias_add(6)          | output              | torch.float32 |         | -0.9194145        | 6.8925204        | 0.0635767      | 0.9073082             | torch.Size([2, 512, 128])        |
| 2511    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(6)                   | input               | torch.float32 |         | -0.9194145        | 6.8925204        | 0.0635767      | 0.9073082             | torch.Size([2, 512, 128])        |
| 2511    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(6)                   | weight              | torch.float32 |         | -0.7504157        | 0.4182976        | -0.0024651     | 0.0052447             | torch.Size([128, 128])           |
| 2511    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(6)                   | bias                | torch.float32 |         | -0.1397866        | 0.1210779        | 0.0064616      | 0.0040949             | torch.Size([128])                |
| 2511    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.6(6)                   | output              | torch.float32 |         | -7.0614781        | 6.9472771        | -0.0389710     | 3.5665560             | torch.Size([2, 512, 128])        |
| 2512    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(6)                   | input               | torch.float32 |         | 0.0000000         | 6.9472771        | 0.7367713      | 1.2402860             | torch.Size([2, 512, 128])        |
| 2512    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.7(6)                   | output              | torch.float32 |         | 0.0000000         | 6.9472771        | 0.7367713      | 1.2402860             | torch.Size([2, 512, 128])        |
| 2513    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 6.9472771        | 0.7367713      | 1.2402860             | torch.Size([2, 512, 128])        |
| 2513    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.input_mean.mean(6)   | output              | torch.float32 |         | 0.5510896         | 0.9660646        | 0.7367713      | 0.0144233             | torch.Size([2, 512, 1])          |
| 2514    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(6)               | input_0             | torch.float32 |         | 0.0000000         | 6.9472771        | 0.7367713      | 1.2402860             | torch.Size([2, 512, 128])        |
| 2514    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(6)               | input_1             | torch.float32 |         | 0.5510896         | 0.9660646        | 0.7367713      | 0.0144233             | torch.Size([2, 512, 1])          |
| 2514    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.8.sub(6)               | output              | torch.float32 |         | -0.9660646        | 6.1566887        | 0.0000000      | 1.2258766             | torch.Size([2, 512, 128])        |
| 2515    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(6)               | input_0             | torch.float32 |         | -0.9660646        | 6.1566887        | 0.0000000      | 1.2258766             | torch.Size([2, 512, 128])        |
| 2515    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(6)               | input_1             | torch.float32 |         | -0.9660646        | 6.1566887        | 0.0000000      | 1.2258766             | torch.Size([2, 512, 128])        |
| 2515    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.mul(6)               | output              | torch.float32 |         | 0.0000000         | 37.9048157       | 1.2258673      | 6.4293342             | torch.Size([2, 512, 128])        |
| 2516    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(6)     | input_0             | torch.float32 |         | 0.0000000         | 37.9048157       | 1.2258673      | 6.4293342             | torch.Size([2, 512, 128])        |
| 2516    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.8.var_mean.mean(6)     | output              | torch.float32 |         | 0.8240253         | 1.8786762        | 1.2258673      | 0.0776516             | torch.Size([2, 512, 1])          |
| 2517    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(6)             | input               | torch.float32 |         | 0.8240253         | 1.8786762        | 1.2258673      | 0.0776516             | torch.Size([2, 512, 1])          |
| 2517    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.8.rsqrt(6)             | output              | torch.float32 |         | 0.7295799         | 1.1016080        | 0.9197865      | 0.0097261             | torch.Size([2, 512, 1])          |
| 2518    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(6)           | input_0             | torch.float32 |         | -0.9660646        | 6.1566887        | 0.0000000      | 1.2258766             | torch.Size([2, 512, 128])        |
| 2518    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(6)           | input_1             | torch.float32 |         | 0.7295799         | 1.1016080        | 0.9197865      | 0.0097261             | torch.Size([2, 512, 1])          |
| 2518    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.out_mul(6)           | output              | torch.float32 |         | -0.7466144        | 5.0255475        | -0.0000000     | 0.9999990             | torch.Size([2, 512, 128])        |
| 2519    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(6)      | input               | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 2519    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.weight_quant(6)      | output              | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 2520    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(6)        | input_0             | torch.float32 |         | -0.7466144        | 5.0255475        | -0.0000000     | 0.9999990             | torch.Size([2, 512, 128])        |
| 2520    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(6)        | input_1             | torch.float32 |         | 0.7673740         | 1.1249810        | 0.9671495      | 0.0053221             | torch.Size([128])                |
| 2520    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.8.weight_mul(6)        | output              | torch.float32 |         | -0.8399270        | 5.1820245        | 0.0140284      | 0.9860727             | torch.Size([2, 512, 128])        |
| 2521    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(6)        | input               | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 2521    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.8.bias_quant(6)        | output              | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 2522    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(6)          | input_0             | torch.float32 |         | -0.8399270        | 5.1820245        | 0.0140284      | 0.9860727             | torch.Size([2, 512, 128])        |
| 2522    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(6)          | input_1             | torch.float32 |         | -0.0537279        | 0.1594015        | 0.0216380      | 0.0014148             | torch.Size([128])                |
| 2522    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.8.bias_add(6)          | output              | torch.float32 |         | -0.8284876        | 5.2063422        | 0.0356664      | 0.9740493             | torch.Size([2, 512, 128])        |
| 2523    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(6)                   | input               | torch.float32 |         | -0.8284876        | 5.2063422        | 0.0356664      | 0.9740493             | torch.Size([2, 512, 128])        |
| 2523    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(6)                   | weight              | torch.float32 |         | -0.4264432        | 0.3183554        | 0.0005866      | 0.0053991             | torch.Size([128, 128])           |
| 2523    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(6)                   | bias                | torch.float32 |         | -0.1690418        | 0.1536980        | -0.0166056     | 0.0039884             | torch.Size([128])                |
| 2523    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.pos_fc.9(6)                   | output              | torch.float32 |         | -11.5562553       | 10.1570349       | -0.4556293     | 4.3660269             | torch.Size([2, 512, 128])        |
| 2524    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(6)                  | input               | torch.float32 |         | 0.0000000         | 10.1570349       | 0.6060451      | 1.5412680             | torch.Size([2, 512, 128])        |
| 2524    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.pos_fc.10(6)                  | output              | torch.float32 |         | 0.0000000         | 10.1570349       | 0.6060451      | 1.5412680             | torch.Size([2, 512, 128])        |
| 2525    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(6)  | input_0             | torch.float32 |         | 0.0000000         | 10.1570349       | 0.6060451      | 1.5412680             | torch.Size([2, 512, 128])        |
| 2525    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.input_mean.mean(6)  | output              | torch.float32 |         | 0.5247085         | 0.7269076        | 0.6060451      | 0.0018265             | torch.Size([2, 512, 1])          |
| 2526    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(6)              | input_0             | torch.float32 |         | 0.0000000         | 10.1570349       | 0.6060451      | 1.5412680             | torch.Size([2, 512, 128])        |
| 2526    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(6)              | input_1             | torch.float32 |         | 0.5247085         | 0.7269076        | 0.6060451      | 0.0018265             | torch.Size([2, 512, 1])          |
| 2526    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.pos_fc.11.sub(6)              | output              | torch.float32 |         | -0.7269076        | 9.6032410        | -0.0000000     | 1.5394433             | torch.Size([2, 512, 128])        |
| 2527    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(6)              | input_0             | torch.float32 |         | -0.7269076        | 9.6032410        | -0.0000000     | 1.5394433             | torch.Size([2, 512, 128])        |
| 2527    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(6)              | input_1             | torch.float32 |         | -0.7269076        | 9.6032410        | -0.0000000     | 1.5394433             | torch.Size([2, 512, 128])        |
| 2527    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.mul(6)              | output              | torch.float32 |         | 0.0000000         | 92.2222366       | 1.5394313      | 24.6622601            | torch.Size([2, 512, 128])        |
| 2528    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(6)    | input_0             | torch.float32 |         | 0.0000000         | 92.2222366       | 1.5394313      | 24.6622601            | torch.Size([2, 512, 128])        |
| 2528    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.pos_fc.11.var_mean.mean(6)    | output              | torch.float32 |         | 1.0282425         | 1.9427199        | 1.5394315      | 0.0456667             | torch.Size([2, 512, 1])          |
| 2529    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(6)            | input               | torch.float32 |         | 1.0282425         | 1.9427199        | 1.5394315      | 0.0456667             | torch.Size([2, 512, 1])          |
| 2529    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.pos_fc.11.rsqrt(6)            | output              | torch.float32 |         | 0.7174535         | 0.9861662        | 0.8119966      | 0.0033546             | torch.Size([2, 512, 1])          |
| 2530    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(6)          | input_0             | torch.float32 |         | -0.7269076        | 9.6032410        | -0.0000000     | 1.5394433             | torch.Size([2, 512, 128])        |
| 2530    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(6)          | input_1             | torch.float32 |         | 0.7174535         | 0.9861662        | 0.8119966      | 0.0033546             | torch.Size([2, 512, 1])          |
| 2530    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.out_mul(6)          | output              | torch.float32 |         | -0.5896393        | 7.3069472        | -0.0000000     | 1.0000010             | torch.Size([2, 512, 128])        |
| 2531    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(6)     | input               | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 2531    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.weight_quant(6)     | output              | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 2532    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(6)       | input_0             | torch.float32 |         | -0.5896393        | 7.3069472        | -0.0000000     | 1.0000010             | torch.Size([2, 512, 128])        |
| 2532    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(6)       | input_1             | torch.float32 |         | 0.7088336         | 1.4002132        | 0.9292046      | 0.0145085             | torch.Size([128])                |
| 2532    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.pos_fc.11.weight_mul(6)       | output              | torch.float32 |         | -0.8256208        | 7.3809738        | 0.0113894      | 0.9061615             | torch.Size([2, 512, 128])        |
| 2533    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(6)       | input               | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 2533    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.pos_fc.11.bias_quant(6)       | output              | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 2534    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(6)         | input_0             | torch.float32 |         | -0.8256208        | 7.3809738        | 0.0113894      | 0.9061615             | torch.Size([2, 512, 128])        |
| 2534    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(6)         | input_1             | torch.float32 |         | -0.0965041        | 0.2669707        | 0.0619903      | 0.0064956             | torch.Size([128])                |
| 2534    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.pos_fc.11.bias_add(6)         | output              | torch.float32 |         | -0.8297419        | 7.3336802        | 0.0733797      | 0.8607930             | torch.Size([2, 512, 128])        |
| 2535    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.3880920       | 53.3906403       | 0.2613437      | 80.2549210            | torch.Size([2, 512, 11])         |
| 2535    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -1.0545198        | 2.8432088        | 0.1494899      | 0.4678711             | torch.Size([2, 512, 3])          |
| 2536    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(6)                  | input               | torch.float32 |         | -1.0545198        | 2.8432088        | 0.1494899      | 0.4678711             | torch.Size([2, 512, 3])          |
| 2536    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(6)                  | weight              | torch.float32 |         | -0.8288664        | 0.6362330        | 0.0683853      | 0.1118651             | torch.Size([32, 3])              |
| 2536    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(6)                  | bias                | torch.float32 |         | -0.5554879        | 0.5432062        | 0.0766153      | 0.1068659             | torch.Size([32])                 |
| 2536    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.0(6)                  | output              | torch.float32 |         | -2.0650144        | 2.4522481        | 0.0892404      | 0.2542883             | torch.Size([2, 512, 32])         |
| 2537    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(6)                  | input               | torch.float32 |         | 0.0000000         | 2.4522481        | 0.2494719      | 0.1017905             | torch.Size([2, 512, 32])         |
| 2537    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.1(6)                  | output              | torch.float32 |         | 0.0000000         | 2.4522481        | 0.2494719      | 0.1017905             | torch.Size([2, 512, 32])         |
| 2538    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(6)  | input_0             | torch.float32 |         | 0.0000000         | 2.4522481        | 0.2494719      | 0.1017905             | torch.Size([2, 512, 32])         |
| 2538    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.input_mean.mean(6)  | output              | torch.float32 |         | 0.1566585         | 0.7036554        | 0.2494719      | 0.0136874             | torch.Size([2, 512, 1])          |
| 2539    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(6)              | input_0             | torch.float32 |         | 0.0000000         | 2.4522481        | 0.2494719      | 0.1017905             | torch.Size([2, 512, 32])         |
| 2539    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(6)              | input_1             | torch.float32 |         | 0.1566585         | 0.7036554        | 0.2494719      | 0.0136874             | torch.Size([2, 512, 1])          |
| 2539    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.2.sub(6)              | output              | torch.float32 |         | -0.7036554        | 1.7485927        | -0.0000000     | 0.0881160             | torch.Size([2, 512, 32])         |
| 2540    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(6)              | input_0             | torch.float32 |         | -0.7036554        | 1.7485927        | -0.0000000     | 0.0881160             | torch.Size([2, 512, 32])         |
| 2540    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(6)              | input_1             | torch.float32 |         | -0.7036554        | 1.7485927        | -0.0000000     | 0.0881160             | torch.Size([2, 512, 32])         |
| 2540    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.mul(6)              | output              | torch.float32 |         | 0.0000000         | 3.0575767        | 0.0881133      | 0.0267471             | torch.Size([2, 512, 32])         |
| 2541    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(6)    | input_0             | torch.float32 |         | 0.0000000         | 3.0575767        | 0.0881133      | 0.0267471             | torch.Size([2, 512, 32])         |
| 2541    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.2.var_mean.mean(6)    | output              | torch.float32 |         | 0.0366491         | 0.4759865        | 0.0881133      | 0.0044899             | torch.Size([2, 512, 1])          |
| 2542    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(6)            | input               | torch.float32 |         | 0.0366491         | 0.4759865        | 0.0881133      | 0.0044899             | torch.Size([2, 512, 1])          |
| 2542    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.2.rsqrt(6)            | output              | torch.float32 |         | 1.4494330         | 5.2228675        | 3.8906879      | 1.0205014             | torch.Size([2, 512, 1])          |
| 2543    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(6)          | input_0             | torch.float32 |         | -0.7036554        | 1.7485927        | -0.0000000     | 0.0881160             | torch.Size([2, 512, 32])         |
| 2543    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(6)          | input_1             | torch.float32 |         | 1.4494330         | 5.2228675        | 3.8906879      | 1.0205014             | torch.Size([2, 512, 1])          |
| 2543    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.out_mul(6)          | output              | torch.float32 |         | -1.1025484        | 3.0954876        | -0.0000000     | 0.9998689             | torch.Size([2, 512, 32])         |
| 2544    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(6)     | input               | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 2544    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.weight_quant(6)     | output              | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 2545    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(6)       | input_0             | torch.float32 |         | -1.1025484        | 3.0954876        | -0.0000000     | 0.9998689             | torch.Size([2, 512, 32])         |
| 2545    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(6)       | input_1             | torch.float32 |         | 0.8401937         | 1.1936733        | 0.9969203      | 0.0071658             | torch.Size([32])                 |
| 2545    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.2.weight_mul(6)       | output              | torch.float32 |         | -1.2494434        | 3.2856121        | 0.0097183      | 0.9975855             | torch.Size([2, 512, 32])         |
| 2546    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(6)       | input               | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 2546    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.2.bias_quant(6)       | output              | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 2547    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(6)         | input_0             | torch.float32 |         | -1.2494434        | 3.2856121        | 0.0097183      | 0.9975855             | torch.Size([2, 512, 32])         |
| 2547    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(6)         | input_1             | torch.float32 |         | -0.1003950        | 0.1085345        | 0.0035262      | 0.0030721             | torch.Size([32])                 |
| 2547    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.2.bias_add(6)         | output              | torch.float32 |         | -1.2266053        | 3.2819912        | 0.0132445      | 0.9522620             | torch.Size([2, 512, 32])         |
| 2548    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(6)                  | input               | torch.float32 |         | -1.2266053        | 3.2819912        | 0.0132445      | 0.9522620             | torch.Size([2, 512, 32])         |
| 2548    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(6)                  | weight              | torch.float32 |         | -0.5793310        | 0.5422795        | -0.0032135     | 0.0176575             | torch.Size([32, 32])             |
| 2548    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(6)                  | bias                | torch.float32 |         | -0.1716317        | 0.2230143        | 0.0007250      | 0.0126328             | torch.Size([32])                 |
| 2548    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.3(6)                  | output              | torch.float32 |         | -4.4090352        | 2.1559558        | -0.2360992     | 1.4040204             | torch.Size([2, 512, 32])         |
| 2549    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(6)                  | input               | torch.float32 |         | 0.0000000         | 2.1559558        | 0.3590161      | 0.2506854             | torch.Size([2, 512, 32])         |
| 2549    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.4(6)                  | output              | torch.float32 |         | 0.0000000         | 2.1559558        | 0.3590161      | 0.2506854             | torch.Size([2, 512, 32])         |
| 2550    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(6)  | input_0             | torch.float32 |         | 0.0000000         | 2.1559558        | 0.3590161      | 0.2506854             | torch.Size([2, 512, 32])         |
| 2550    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.input_mean.mean(6)  | output              | torch.float32 |         | 0.2685128         | 0.4362455        | 0.3590161      | 0.0007917             | torch.Size([2, 512, 1])          |
| 2551    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(6)              | input_0             | torch.float32 |         | 0.0000000         | 2.1559558        | 0.3590161      | 0.2506854             | torch.Size([2, 512, 32])         |
| 2551    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(6)              | input_1             | torch.float32 |         | 0.2685128         | 0.4362455        | 0.3590161      | 0.0007917             | torch.Size([2, 512, 1])          |
| 2551    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.5.sub(6)              | output              | torch.float32 |         | -0.4362455        | 1.8137056        | 0.0000000      | 0.2498944             | torch.Size([2, 512, 32])         |
| 2552    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(6)              | input_0             | torch.float32 |         | -0.4362455        | 1.8137056        | 0.0000000      | 0.2498944             | torch.Size([2, 512, 32])         |
| 2552    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(6)              | input_1             | torch.float32 |         | -0.4362455        | 1.8137056        | 0.0000000      | 0.2498944             | torch.Size([2, 512, 32])         |
| 2552    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.mul(6)              | output              | torch.float32 |         | 0.0000000         | 3.2895279        | 0.2498868      | 0.1560035             | torch.Size([2, 512, 32])         |
| 2553    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(6)    | input_0             | torch.float32 |         | 0.0000000         | 3.2895279        | 0.2498868      | 0.1560035             | torch.Size([2, 512, 32])         |
| 2553    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.5.var_mean.mean(6)    | output              | torch.float32 |         | 0.1540799         | 0.3746600        | 0.2498868      | 0.0032473             | torch.Size([2, 512, 1])          |
| 2554    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(6)            | input               | torch.float32 |         | 0.1540799         | 0.3746600        | 0.2498868      | 0.0032473             | torch.Size([2, 512, 1])          |
| 2554    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.5.rsqrt(6)            | output              | torch.float32 |         | 1.6337121         | 2.5474930        | 2.0450406      | 0.0681189             | torch.Size([2, 512, 1])          |
| 2555    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(6)          | input_0             | torch.float32 |         | -0.4362455        | 1.8137056        | 0.0000000      | 0.2498944             | torch.Size([2, 512, 32])         |
| 2555    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(6)          | input_1             | torch.float32 |         | 1.6337121         | 2.5474930        | 2.0450406      | 0.0681189             | torch.Size([2, 512, 1])          |
| 2555    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.out_mul(6)          | output              | torch.float32 |         | -0.9100208        | 3.8228927        | 0.0000000      | 0.9999880             | torch.Size([2, 512, 32])         |
| 2556    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(6)     | input               | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 2556    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.weight_quant(6)     | output              | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 2557    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(6)       | input_0             | torch.float32 |         | -0.9100208        | 3.8228927        | 0.0000000      | 0.9999880             | torch.Size([2, 512, 32])         |
| 2557    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(6)       | input_1             | torch.float32 |         | 0.8191299         | 1.0923718        | 0.9808199      | 0.0031231             | torch.Size([32])                 |
| 2557    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.5.weight_mul(6)       | output              | torch.float32 |         | -0.9181557        | 3.6531634        | 0.0094903      | 0.9923508             | torch.Size([2, 512, 32])         |
| 2558    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(6)       | input               | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 2558    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.5.bias_quant(6)       | output              | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 2559    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(6)         | input_0             | torch.float32 |         | -0.9181557        | 3.6531634        | 0.0094903      | 0.9923508             | torch.Size([2, 512, 32])         |
| 2559    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(6)         | input_1             | torch.float32 |         | -0.0704119        | 0.0788569        | 0.0097621      | 0.0015200             | torch.Size([32])                 |
| 2559    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.5.bias_add(6)         | output              | torch.float32 |         | -0.9072347        | 3.6811860        | 0.0192524      | 0.9666710             | torch.Size([2, 512, 32])         |
| 2560    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(6)                  | input               | torch.float32 |         | -0.9072347        | 3.6811860        | 0.0192524      | 0.9666710             | torch.Size([2, 512, 32])         |
| 2560    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(6)                  | weight              | torch.float32 |         | -0.5712157        | 0.5219681        | -0.0062917     | 0.0166056             | torch.Size([32, 32])             |
| 2560    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(6)                  | bias                | torch.float32 |         | -0.1649730        | 0.2318604        | 0.0253026      | 0.0136139             | torch.Size([32])                 |
| 2560    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.6(6)                  | output              | torch.float32 |         | -4.5753741        | 2.6534457        | -0.1302278     | 1.2164214             | torch.Size([2, 512, 32])         |
| 2561    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(6)                  | input               | torch.float32 |         | 0.0000000         | 2.6534457        | 0.3682906      | 0.2753719             | torch.Size([2, 512, 32])         |
| 2561    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.7(6)                  | output              | torch.float32 |         | 0.0000000         | 2.6534457        | 0.3682906      | 0.2753719             | torch.Size([2, 512, 32])         |
| 2562    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(6)  | input_0             | torch.float32 |         | 0.0000000         | 2.6534457        | 0.3682906      | 0.2753719             | torch.Size([2, 512, 32])         |
| 2562    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.input_mean.mean(6)  | output              | torch.float32 |         | 0.1928140         | 0.4944715        | 0.3682905      | 0.0087835             | torch.Size([2, 512, 1])          |
| 2563    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(6)              | input_0             | torch.float32 |         | 0.0000000         | 2.6534457        | 0.3682906      | 0.2753719             | torch.Size([2, 512, 32])         |
| 2563    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(6)              | input_1             | torch.float32 |         | 0.1928140         | 0.4944715        | 0.3682905      | 0.0087835             | torch.Size([2, 512, 1])          |
| 2563    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.8.sub(6)              | output              | torch.float32 |         | -0.4944715        | 2.2124400        | 0.0000000      | 0.2665967             | torch.Size([2, 512, 32])         |
| 2564    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(6)              | input_0             | torch.float32 |         | -0.4944715        | 2.2124400        | 0.0000000      | 0.2665967             | torch.Size([2, 512, 32])         |
| 2564    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(6)              | input_1             | torch.float32 |         | -0.4944715        | 2.2124400        | 0.0000000      | 0.2665967             | torch.Size([2, 512, 32])         |
| 2564    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.mul(6)              | output              | torch.float32 |         | 0.0000000         | 4.8948908        | 0.2665886      | 0.3170105             | torch.Size([2, 512, 32])         |
| 2565    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(6)    | input_0             | torch.float32 |         | 0.0000000         | 4.8948908        | 0.2665886      | 0.3170105             | torch.Size([2, 512, 32])         |
| 2565    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.8.var_mean.mean(6)    | output              | torch.float32 |         | 0.1382290         | 0.3972616        | 0.2665885      | 0.0055096             | torch.Size([2, 512, 1])          |
| 2566    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(6)            | input               | torch.float32 |         | 0.1382290         | 0.3972616        | 0.2665885      | 0.0055096             | torch.Size([2, 512, 1])          |
| 2566    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.8.rsqrt(6)            | output              | torch.float32 |         | 1.5865591         | 2.6895812        | 2.0102391      | 0.1198892             | torch.Size([2, 512, 1])          |
| 2567    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(6)          | input_0             | torch.float32 |         | -0.4944715        | 2.2124400        | 0.0000000      | 0.2665967             | torch.Size([2, 512, 32])         |
| 2567    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(6)          | input_1             | torch.float32 |         | 1.5865591         | 2.6895812        | 2.0102391      | 0.1198892             | torch.Size([2, 512, 1])          |
| 2567    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.out_mul(6)          | output              | torch.float32 |         | -0.9264442        | 3.7793710        | -0.0000000     | 0.9999889             | torch.Size([2, 512, 32])         |
| 2568    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(6)     | input               | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 2568    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.weight_quant(6)     | output              | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 2569    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(6)       | input_0             | torch.float32 |         | -0.9264442        | 3.7793710        | -0.0000000     | 0.9999889             | torch.Size([2, 512, 32])         |
| 2569    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(6)       | input_1             | torch.float32 |         | 0.8903234         | 1.1315480        | 0.9912031      | 0.0026835             | torch.Size([32])                 |
| 2569    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.8.weight_mul(6)       | output              | torch.float32 |         | -1.0483161        | 4.0492587        | 0.0044791      | 1.0274113             | torch.Size([2, 512, 32])         |
| 2570    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(6)       | input               | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 2570    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.8.bias_quant(6)       | output              | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 2571    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(6)         | input_0             | torch.float32 |         | -1.0483161        | 4.0492587        | 0.0044791      | 1.0274113             | torch.Size([2, 512, 32])         |
| 2571    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(6)         | input_1             | torch.float32 |         | -0.0586081        | 0.0779655        | 0.0041962      | 0.0015323             | torch.Size([32])                 |
| 2571    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.8.bias_add(6)         | output              | torch.float32 |         | -1.0172354        | 4.0741873        | 0.0086753      | 1.0070920             | torch.Size([2, 512, 32])         |
| 2572    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(6)                  | input               | torch.float32 |         | -1.0172354        | 4.0741873        | 0.0086753      | 1.0070920             | torch.Size([2, 512, 32])         |
| 2572    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(6)                  | weight              | torch.float32 |         | -0.3204980        | 0.3365203        | -0.0020388     | 0.0145364             | torch.Size([32, 32])             |
| 2572    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(6)                  | bias                | torch.float32 |         | -0.1559148        | 0.2119379        | 0.0091616      | 0.0105488             | torch.Size([32])                 |
| 2572    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.size_fc.9(6)                  | output              | torch.float32 |         | -2.4724996        | 2.6751459        | 0.0217768      | 0.7669522             | torch.Size([2, 512, 32])         |
| 2573    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(6)                 | input               | torch.float32 |         | 0.0000000         | 2.6751459        | 0.3548756      | 0.2709665             | torch.Size([2, 512, 32])         |
| 2573    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.size_fc.10(6)                 | output              | torch.float32 |         | 0.0000000         | 2.6751459        | 0.3548756      | 0.2709665             | torch.Size([2, 512, 32])         |
| 2574    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(6) | input_0             | torch.float32 |         | 0.0000000         | 2.6751459        | 0.3548756      | 0.2709665             | torch.Size([2, 512, 32])         |
| 2574    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.input_mean.mean(6) | output              | torch.float32 |         | 0.2494594         | 0.5680424        | 0.3548756      | 0.0028709             | torch.Size([2, 512, 1])          |
| 2575    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(6)             | input_0             | torch.float32 |         | 0.0000000         | 2.6751459        | 0.3548756      | 0.2709665             | torch.Size([2, 512, 32])         |
| 2575    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(6)             | input_1             | torch.float32 |         | 0.2494594         | 0.5680424        | 0.3548756      | 0.0028709             | torch.Size([2, 512, 1])          |
| 2575    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.size_fc.11.sub(6)             | output              | torch.float32 |         | -0.5680424        | 2.2619691        | 0.0000000      | 0.2680984             | torch.Size([2, 512, 32])         |
| 2576    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(6)             | input_0             | torch.float32 |         | -0.5680424        | 2.2619691        | 0.0000000      | 0.2680984             | torch.Size([2, 512, 32])         |
| 2576    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(6)             | input_1             | torch.float32 |         | -0.5680424        | 2.2619691        | 0.0000000      | 0.2680984             | torch.Size([2, 512, 32])         |
| 2576    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.mul(6)             | output              | torch.float32 |         | 0.0000000         | 5.1165042        | 0.2680902      | 0.3424724             | torch.Size([2, 512, 32])         |
| 2577    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 5.1165042        | 0.2680902      | 0.3424724             | torch.Size([2, 512, 32])         |
| 2577    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.size_fc.11.var_mean.mean(6)   | output              | torch.float32 |         | 0.1814173         | 0.4032248        | 0.2680902      | 0.0020543             | torch.Size([2, 512, 1])          |
| 2578    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(6)           | input               | torch.float32 |         | 0.1814173         | 0.4032248        | 0.2680902      | 0.0020543             | torch.Size([2, 512, 1])          |
| 2578    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.size_fc.11.rsqrt(6)           | output              | torch.float32 |         | 1.5747839         | 2.3477325        | 1.9518483      | 0.0268706             | torch.Size([2, 512, 1])          |
| 2579    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(6)         | input_0             | torch.float32 |         | -0.5680424        | 2.2619691        | 0.0000000      | 0.2680984             | torch.Size([2, 512, 32])         |
| 2579    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(6)         | input_1             | torch.float32 |         | 1.5747839         | 2.3477325        | 1.9518483      | 0.0268706             | torch.Size([2, 512, 1])          |
| 2579    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.out_mul(6)         | output              | torch.float32 |         | -1.0667053        | 3.8194320        | -0.0000000     | 0.9999921             | torch.Size([2, 512, 32])         |
| 2580    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(6)    | input               | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 2580    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.weight_quant(6)    | output              | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 2581    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(6)      | input_0             | torch.float32 |         | -1.0667053        | 3.8194320        | -0.0000000     | 0.9999921             | torch.Size([2, 512, 32])         |
| 2581    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(6)      | input_1             | torch.float32 |         | 0.8289159         | 1.6609058        | 1.2561316      | 0.0353652             | torch.Size([32])                 |
| 2581    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.size_fc.11.weight_mul(6)      | output              | torch.float32 |         | -1.7716972        | 5.0025239        | -0.0218732     | 1.4697907             | torch.Size([2, 512, 32])         |
| 2582    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(6)      | input               | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 2582    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.size_fc.11.bias_quant(6)      | output              | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 2583    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(6)        | input_0             | torch.float32 |         | -1.7716972        | 5.0025239        | -0.0218732     | 1.4697907             | torch.Size([2, 512, 32])         |
| 2583    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(6)        | input_1             | torch.float32 |         | -0.1194881        | 0.2576658        | 0.0445686      | 0.0113612             | torch.Size([32])                 |
| 2583    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.size_fc.11.bias_add(6)        | output              | torch.float32 |         | -1.7223351        | 5.0518861        | 0.0226954      | 1.3956898             | torch.Size([2, 512, 32])         |
| 2584    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.3880920       | 53.3906403       | 0.2613437      | 80.2549210            | torch.Size([2, 512, 11])         |
| 2584    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -1.1830406        | 1.1821816        | -0.0068282     | 0.2311063             | torch.Size([2, 512, 2])          |
| 2585    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(6)                   | input               | torch.float32 |         | -1.1830406        | 1.1821816        | -0.0068282     | 0.2311063             | torch.Size([2, 512, 2])          |
| 2585    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(6)                   | weight              | torch.float32 |         | -0.7023237        | 0.7394427        | 0.0490668      | 0.1972211             | torch.Size([32, 2])              |
| 2585    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(6)                   | bias                | torch.float32 |         | -0.7971504        | 0.6681666        | -0.1171320     | 0.1641774             | torch.Size([32])                 |
| 2585    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.0(6)                   | output              | torch.float32 |         | -1.6319782        | 1.3894981        | -0.1183359     | 0.2508473             | torch.Size([2, 512, 32])         |
| 2586    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(6)                   | input               | torch.float32 |         | 0.0000000         | 1.3894981        | 0.1524233      | 0.0681349             | torch.Size([2, 512, 32])         |
| 2586    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.1(6)                   | output              | torch.float32 |         | 0.0000000         | 1.3894981        | 0.1524233      | 0.0681349             | torch.Size([2, 512, 32])         |
| 2587    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 1.3894981        | 0.1524233      | 0.0681349             | torch.Size([2, 512, 32])         |
| 2587    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.input_mean.mean(6)   | output              | torch.float32 |         | 0.1084493         | 0.2572865        | 0.1524233      | 0.0012217             | torch.Size([2, 512, 1])          |
| 2588    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(6)               | input_0             | torch.float32 |         | 0.0000000         | 1.3894981        | 0.1524233      | 0.0681349             | torch.Size([2, 512, 32])         |
| 2588    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(6)               | input_1             | torch.float32 |         | 0.1084493         | 0.2572865        | 0.1524233      | 0.0012217             | torch.Size([2, 512, 1])          |
| 2588    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.2.sub(6)               | output              | torch.float32 |         | -0.2572865        | 1.1420528        | 0.0000000      | 0.0669143             | torch.Size([2, 512, 32])         |
| 2589    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(6)               | input_0             | torch.float32 |         | -0.2572865        | 1.1420528        | 0.0000000      | 0.0669143             | torch.Size([2, 512, 32])         |
| 2589    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(6)               | input_1             | torch.float32 |         | -0.2572865        | 1.1420528        | 0.0000000      | 0.0669143             | torch.Size([2, 512, 32])         |
| 2589    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.mul(6)               | output              | torch.float32 |         | 0.0000000         | 1.3042846        | 0.0669123      | 0.0169946             | torch.Size([2, 512, 32])         |
| 2590    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(6)     | input_0             | torch.float32 |         | 0.0000000         | 1.3042846        | 0.0669123      | 0.0169946             | torch.Size([2, 512, 32])         |
| 2590    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.2.var_mean.mean(6)     | output              | torch.float32 |         | 0.0405874         | 0.1393212        | 0.0669123      | 0.0005030             | torch.Size([2, 512, 1])          |
| 2591    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(6)             | input               | torch.float32 |         | 0.0405874         | 0.1393212        | 0.0669123      | 0.0005030             | torch.Size([2, 512, 1])          |
| 2591    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.2.rsqrt(6)             | output              | torch.float32 |         | 2.6790190         | 4.9630761        | 4.0086389      | 0.3436554             | torch.Size([2, 512, 1])          |
| 2592    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(6)           | input_0             | torch.float32 |         | -0.2572865        | 1.1420528        | 0.0000000      | 0.0669143             | torch.Size([2, 512, 32])         |
| 2592    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(6)           | input_1             | torch.float32 |         | 2.6790190         | 4.9630761        | 4.0086389      | 0.3436554             | torch.Size([2, 512, 1])          |
| 2592    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.out_mul(6)           | output              | torch.float32 |         | -0.7614864        | 4.0042624        | 0.0000000      | 0.9998664             | torch.Size([2, 512, 32])         |
| 2593    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(6)      | input               | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 2593    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.weight_quant(6)      | output              | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 2594    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(6)        | input_0             | torch.float32 |         | -0.7614864        | 4.0042624        | 0.0000000      | 0.9998664             | torch.Size([2, 512, 32])         |
| 2594    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(6)        | input_1             | torch.float32 |         | 0.8947600         | 1.1748335        | 0.9865216      | 0.0041537             | torch.Size([32])                 |
| 2594    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.2.weight_mul(6)        | output              | torch.float32 |         | -0.8663008        | 4.3359618        | 0.0032257      | 1.0000809             | torch.Size([2, 512, 32])         |
| 2595    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(6)        | input               | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 2595    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.2.bias_quant(6)        | output              | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 2596    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(6)          | input_0             | torch.float32 |         | -0.8663008        | 4.3359618        | 0.0032257      | 1.0000809             | torch.Size([2, 512, 32])         |
| 2596    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(6)          | input_1             | torch.float32 |         | -0.0879948        | 0.1319895        | 0.0285039      | 0.0034159             | torch.Size([32])                 |
| 2596    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.2.bias_add(6)          | output              | torch.float32 |         | -0.8110147        | 4.2555003        | 0.0317296      | 0.9257158             | torch.Size([2, 512, 32])         |
| 2597    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(6)                   | input               | torch.float32 |         | -0.8110147        | 4.2555003        | 0.0317296      | 0.9257158             | torch.Size([2, 512, 32])         |
| 2597    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(6)                   | weight              | torch.float32 |         | -1.0547366        | 0.5812716        | 0.0070099      | 0.0187704             | torch.Size([32, 32])             |
| 2597    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(6)                   | bias                | torch.float32 |         | -0.2183180        | 0.1396109        | -0.0140744     | 0.0103446             | torch.Size([32])                 |
| 2597    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.3(6)                   | output              | torch.float32 |         | -5.3508711        | 1.7011070        | -0.4815490     | 1.4273827             | torch.Size([2, 512, 32])         |
| 2598    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(6)                   | input               | torch.float32 |         | 0.0000000         | 1.7011070        | 0.2296669      | 0.1283039             | torch.Size([2, 512, 32])         |
| 2598    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.4(6)                   | output              | torch.float32 |         | 0.0000000         | 1.7011070        | 0.2296669      | 0.1283039             | torch.Size([2, 512, 32])         |
| 2599    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 1.7011070        | 0.2296669      | 0.1283039             | torch.Size([2, 512, 32])         |
| 2599    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.input_mean.mean(6)   | output              | torch.float32 |         | 0.1696737         | 0.3847970        | 0.2296669      | 0.0015737             | torch.Size([2, 512, 1])          |
| 2600    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(6)               | input_0             | torch.float32 |         | 0.0000000         | 1.7011070        | 0.2296669      | 0.1283039             | torch.Size([2, 512, 32])         |
| 2600    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(6)               | input_1             | torch.float32 |         | 0.1696737         | 0.3847970        | 0.2296669      | 0.0015737             | torch.Size([2, 512, 1])          |
| 2600    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.5.sub(6)               | output              | torch.float32 |         | -0.3847970        | 1.4374099        | -0.0000000     | 0.1267317             | torch.Size([2, 512, 32])         |
| 2601    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(6)               | input_0             | torch.float32 |         | -0.3847970        | 1.4374099        | -0.0000000     | 0.1267317             | torch.Size([2, 512, 32])         |
| 2601    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(6)               | input_1             | torch.float32 |         | -0.3847970        | 1.4374099        | -0.0000000     | 0.1267317             | torch.Size([2, 512, 32])         |
| 2601    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.mul(6)               | output              | torch.float32 |         | 0.0000000         | 2.0661471        | 0.1267278      | 0.0508211             | torch.Size([2, 512, 32])         |
| 2602    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(6)     | input_0             | torch.float32 |         | 0.0000000         | 2.0661471        | 0.1267278      | 0.0508211             | torch.Size([2, 512, 32])         |
| 2602    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.5.var_mean.mean(6)     | output              | torch.float32 |         | 0.0749367         | 0.2820674        | 0.1267278      | 0.0012341             | torch.Size([2, 512, 1])          |
| 2603    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(6)             | input               | torch.float32 |         | 0.0749367         | 0.2820674        | 0.1267278      | 0.0012341             | torch.Size([2, 512, 1])          |
| 2603    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.5.rsqrt(6)             | output              | torch.float32 |         | 1.8828505         | 3.6527824        | 2.8788414      | 0.1207846             | torch.Size([2, 512, 1])          |
| 2604    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(6)           | input_0             | torch.float32 |         | -0.3847970        | 1.4374099        | -0.0000000     | 0.1267317             | torch.Size([2, 512, 32])         |
| 2604    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(6)           | input_1             | torch.float32 |         | 1.8828505         | 3.6527824        | 2.8788414      | 0.1207846             | torch.Size([2, 512, 1])          |
| 2604    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.out_mul(6)           | output              | torch.float32 |         | -0.8418084        | 3.5714390        | 0.0000000      | 0.9999465             | torch.Size([2, 512, 32])         |
| 2605    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(6)      | input               | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 2605    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.weight_quant(6)      | output              | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 2606    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(6)        | input_0             | torch.float32 |         | -0.8418084        | 3.5714390        | 0.0000000      | 0.9999465             | torch.Size([2, 512, 32])         |
| 2606    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(6)        | input_1             | torch.float32 |         | 0.8550419         | 1.1198171        | 0.9805899      | 0.0036729             | torch.Size([32])                 |
| 2606    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.5.weight_mul(6)        | output              | torch.float32 |         | -0.9155545        | 3.6445916        | -0.0015007     | 0.9733175             | torch.Size([2, 512, 32])         |
| 2607    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(6)        | input               | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 2607    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.5.bias_quant(6)        | output              | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 2608    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(6)          | input_0             | torch.float32 |         | -0.9155545        | 3.6445916        | -0.0015007     | 0.9733175             | torch.Size([2, 512, 32])         |
| 2608    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(6)          | input_1             | torch.float32 |         | -0.0792132        | 0.1045145        | 0.0242442      | 0.0021608             | torch.Size([32])                 |
| 2608    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.5.bias_add(6)          | output              | torch.float32 |         | -0.8503588        | 3.6037753        | 0.0227435      | 0.9241350             | torch.Size([2, 512, 32])         |
| 2609    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(6)                   | input               | torch.float32 |         | -0.8503588        | 3.6037753        | 0.0227435      | 0.9241350             | torch.Size([2, 512, 32])         |
| 2609    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(6)                   | weight              | torch.float32 |         | -0.4480607        | 0.3678726        | 0.0004879      | 0.0160908             | torch.Size([32, 32])             |
| 2609    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(6)                   | bias                | torch.float32 |         | -0.1861591        | 0.1739754        | 0.0155446      | 0.0137690             | torch.Size([32])                 |
| 2609    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.6(6)                   | output              | torch.float32 |         | -3.6835513        | 2.4217055        | -0.2382872     | 1.3090295             | torch.Size([2, 512, 32])         |
| 2610    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(6)                   | input               | torch.float32 |         | 0.0000000         | 2.4217055        | 0.3338123      | 0.2101809             | torch.Size([2, 512, 32])         |
| 2610    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.7(6)                   | output              | torch.float32 |         | 0.0000000         | 2.4217055        | 0.3338123      | 0.2101809             | torch.Size([2, 512, 32])         |
| 2611    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 2.4217055        | 0.3338123      | 0.2101809             | torch.Size([2, 512, 32])         |
| 2611    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.input_mean.mean(6)   | output              | torch.float32 |         | 0.2458088         | 0.5106908        | 0.3338123      | 0.0013067             | torch.Size([2, 512, 1])          |
| 2612    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(6)               | input_0             | torch.float32 |         | 0.0000000         | 2.4217055        | 0.3338123      | 0.2101809             | torch.Size([2, 512, 32])         |
| 2612    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(6)               | input_1             | torch.float32 |         | 0.2458088         | 0.5106908        | 0.3338123      | 0.0013067             | torch.Size([2, 512, 1])          |
| 2612    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.8.sub(6)               | output              | torch.float32 |         | -0.5106908        | 2.1434438        | -0.0000000     | 0.2088754             | torch.Size([2, 512, 32])         |
| 2613    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(6)               | input_0             | torch.float32 |         | -0.5106908        | 2.1434438        | -0.0000000     | 0.2088754             | torch.Size([2, 512, 32])         |
| 2613    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(6)               | input_1             | torch.float32 |         | -0.5106908        | 2.1434438        | -0.0000000     | 0.2088754             | torch.Size([2, 512, 32])         |
| 2613    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.mul(6)               | output              | torch.float32 |         | 0.0000000         | 4.5943513        | 0.2088690      | 0.1446744             | torch.Size([2, 512, 32])         |
| 2614    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(6)     | input_0             | torch.float32 |         | 0.0000000         | 4.5943513        | 0.2088690      | 0.1446744             | torch.Size([2, 512, 32])         |
| 2614    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.8.var_mean.mean(6)     | output              | torch.float32 |         | 0.1600288         | 0.3820646        | 0.2088690      | 0.0007939             | torch.Size([2, 512, 1])          |
| 2615    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(6)             | input               | torch.float32 |         | 0.1600288         | 0.3820646        | 0.2088690      | 0.0007939             | torch.Size([2, 512, 1])          |
| 2615    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.8.rsqrt(6)             | output              | torch.float32 |         | 1.6178041         | 2.4996970        | 2.2018843      | 0.0193966             | torch.Size([2, 512, 1])          |
| 2616    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(6)           | input_0             | torch.float32 |         | -0.5106908        | 2.1434438        | -0.0000000     | 0.2088754             | torch.Size([2, 512, 32])         |
| 2616    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(6)           | input_1             | torch.float32 |         | 1.6178041         | 2.4996970        | 2.2018843      | 0.0193966             | torch.Size([2, 512, 1])          |
| 2616    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.out_mul(6)           | output              | torch.float32 |         | -0.8640291        | 4.4140210        | -0.0000000     | 0.9999819             | torch.Size([2, 512, 32])         |
| 2617    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(6)      | input               | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 2617    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.weight_quant(6)      | output              | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 2618    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(6)        | input_0             | torch.float32 |         | -0.8640291        | 4.4140210        | -0.0000000     | 0.9999819             | torch.Size([2, 512, 32])         |
| 2618    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(6)        | input_1             | torch.float32 |         | 0.8469434         | 1.1090456        | 0.9866461      | 0.0031007             | torch.Size([32])                 |
| 2618    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.8.weight_mul(6)        | output              | torch.float32 |         | -0.9582477        | 4.7060094        | -0.0004243     | 0.9977639             | torch.Size([2, 512, 32])         |
| 2619    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(6)        | input               | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 2619    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.8.bias_quant(6)        | output              | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 2620    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(6)          | input_0             | torch.float32 |         | -0.9582477        | 4.7060094        | -0.0004243     | 0.9977639             | torch.Size([2, 512, 32])         |
| 2620    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(6)          | input_1             | torch.float32 |         | -0.0626723        | 0.0887763        | 0.0071697      | 0.0011301             | torch.Size([32])                 |
| 2620    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.8.bias_add(6)          | output              | torch.float32 |         | -0.9567643        | 4.7276154        | 0.0067454      | 0.9778799             | torch.Size([2, 512, 32])         |
| 2621    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(6)                   | input               | torch.float32 |         | -0.9567643        | 4.7276154        | 0.0067454      | 0.9778799             | torch.Size([2, 512, 32])         |
| 2621    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(6)                   | weight              | torch.float32 |         | -0.5597425        | 0.7001730        | 0.0015679      | 0.0160348             | torch.Size([32, 32])             |
| 2621    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(6)                   | bias                | torch.float32 |         | -0.1810580        | 0.1736723        | -0.0279047     | 0.0091159             | torch.Size([32])                 |
| 2621    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.yaw_fc.9(6)                   | output              | torch.float32 |         | -4.3366466        | 3.4908969        | -0.2355871     | 1.1426536             | torch.Size([2, 512, 32])         |
| 2622    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(6)                  | input               | torch.float32 |         | 0.0000000         | 3.4908969        | 0.2863230      | 0.2876301             | torch.Size([2, 512, 32])         |
| 2622    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.yaw_fc.10(6)                  | output              | torch.float32 |         | 0.0000000         | 3.4908969        | 0.2863230      | 0.2876301             | torch.Size([2, 512, 32])         |
| 2623    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(6)  | input_0             | torch.float32 |         | 0.0000000         | 3.4908969        | 0.2863230      | 0.2876301             | torch.Size([2, 512, 32])         |
| 2623    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.input_mean.mean(6)  | output              | torch.float32 |         | 0.2230711         | 0.3991627        | 0.2863230      | 0.0015075             | torch.Size([2, 512, 1])          |
| 2624    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(6)              | input_0             | torch.float32 |         | 0.0000000         | 3.4908969        | 0.2863230      | 0.2876301             | torch.Size([2, 512, 32])         |
| 2624    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(6)              | input_1             | torch.float32 |         | 0.2230711         | 0.3991627        | 0.2863230      | 0.0015075             | torch.Size([2, 512, 1])          |
| 2624    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.yaw_fc.11.sub(6)              | output              | torch.float32 |         | -0.3991627        | 3.1998949        | -0.0000000     | 0.2861240             | torch.Size([2, 512, 32])         |
| 2625    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(6)              | input_0             | torch.float32 |         | -0.3991627        | 3.1998949        | -0.0000000     | 0.2861240             | torch.Size([2, 512, 32])         |
| 2625    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(6)              | input_1             | torch.float32 |         | -0.3991627        | 3.1998949        | -0.0000000     | 0.2861240             | torch.Size([2, 512, 32])         |
| 2625    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.mul(6)              | output              | torch.float32 |         | 0.0000000         | 10.2393274       | 0.2861153      | 0.8463002             | torch.Size([2, 512, 32])         |
| 2626    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(6)    | input_0             | torch.float32 |         | 0.0000000         | 10.2393274       | 0.2861153      | 0.8463002             | torch.Size([2, 512, 32])         |
| 2626    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.yaw_fc.11.var_mean.mean(6)    | output              | torch.float32 |         | 0.1364546         | 0.4469218        | 0.2861153      | 0.0068622             | torch.Size([2, 512, 1])          |
| 2627    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(6)            | input               | torch.float32 |         | 0.1364546         | 0.4469218        | 0.2861153      | 0.0068622             | torch.Size([2, 512, 1])          |
| 2627    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.yaw_fc.11.rsqrt(6)            | output              | torch.float32 |         | 1.4958200         | 2.7070105        | 1.9333916      | 0.0882364             | torch.Size([2, 512, 1])          |
| 2628    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(6)          | input_0             | torch.float32 |         | -0.3991627        | 3.1998949        | -0.0000000     | 0.2861240             | torch.Size([2, 512, 32])         |
| 2628    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(6)          | input_1             | torch.float32 |         | 1.4958200         | 2.7070105        | 1.9333916      | 0.0882364             | torch.Size([2, 512, 1])          |
| 2628    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.out_mul(6)          | output              | torch.float32 |         | -0.7878976        | 4.8558912        | 0.0000000      | 0.9999921             | torch.Size([2, 512, 32])         |
| 2629    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(6)     | input               | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 2629    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.weight_quant(6)     | output              | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 2630    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(6)       | input_0             | torch.float32 |         | -0.7878976        | 4.8558912        | 0.0000000      | 0.9999921             | torch.Size([2, 512, 32])         |
| 2630    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(6)       | input_1             | torch.float32 |         | 0.8363900         | 1.4688344        | 1.0570920      | 0.0396277             | torch.Size([32])                 |
| 2630    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.yaw_fc.11.weight_mul(6)       | output              | torch.float32 |         | -1.1572911        | 4.8862367        | -0.0498900     | 0.9441681             | torch.Size([2, 512, 32])         |
| 2631    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(6)       | input               | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 2631    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.yaw_fc.11.bias_quant(6)       | output              | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 2632    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(6)         | input_0             | torch.float32 |         | -1.1572911        | 4.8862367        | -0.0498900     | 0.9441681             | torch.Size([2, 512, 32])         |
| 2632    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(6)         | input_1             | torch.float32 |         | -0.1492936        | 0.2842544        | 0.0803791      | 0.0109446             | torch.Size([32])                 |
| 2632    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.yaw_fc.11.bias_add(6)         | output              | torch.float32 |         | -0.9720487        | 4.8291078        | 0.0304891      | 0.8755097             | torch.Size([2, 512, 32])         |
| 2633    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | input_0             | torch.float32 |         | -53.3880920       | 53.3906403       | 0.2613437      | 80.2549210            | torch.Size([2, 512, 11])         |
| 2633    | torch.Tensor.__getitem__                                                          | head.anchor_encoder                               | output              | torch.float32 |         | -12.2548752       | 9.2566652        | -0.1938958     | 2.4223375             | torch.Size([2, 512, 3])          |
| 2634    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(6)                   | input               | torch.float32 |         | -12.2548752       | 9.2566652        | -0.1938958     | 2.4223375             | torch.Size([2, 512, 3])          |
| 2634    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(6)                   | weight              | torch.float32 |         | -1.0475703        | 0.9848034        | -0.0054673     | 0.2080412             | torch.Size([64, 3])              |
| 2634    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(6)                   | bias                | torch.float32 |         | -0.8030427        | 0.5068271        | -0.0504076     | 0.1294928             | torch.Size([64])                 |
| 2634    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.0(6)                   | output              | torch.float32 |         | -11.8174467       | 13.4691362       | -0.0981765     | 1.8210133             | torch.Size([2, 512, 64])         |
| 2635    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(6)                   | input               | torch.float32 |         | 0.0000000         | 13.4691362       | 0.2976067      | 0.7224927             | torch.Size([2, 512, 64])         |
| 2635    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.1(6)                   | output              | torch.float32 |         | 0.0000000         | 13.4691362       | 0.2976067      | 0.7224927             | torch.Size([2, 512, 64])         |
| 2636    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 13.4691362       | 0.2976067      | 0.7224927             | torch.Size([2, 512, 64])         |
| 2636    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.input_mean.mean(6)   | output              | torch.float32 |         | 0.1191292         | 2.3375967        | 0.2976067      | 0.1568380             | torch.Size([2, 512, 1])          |
| 2637    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(6)               | input_0             | torch.float32 |         | 0.0000000         | 13.4691362       | 0.2976067      | 0.7224927             | torch.Size([2, 512, 64])         |
| 2637    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(6)               | input_1             | torch.float32 |         | 0.1191292         | 2.3375967        | 0.2976067      | 0.1568380             | torch.Size([2, 512, 1])          |
| 2637    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.2.sub(6)               | output              | torch.float32 |         | -2.3375967        | 11.1527119       | -0.0000000     | 0.5658054             | torch.Size([2, 512, 64])         |
| 2638    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(6)               | input_0             | torch.float32 |         | -2.3375967        | 11.1527119       | -0.0000000     | 0.5658054             | torch.Size([2, 512, 64])         |
| 2638    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(6)               | input_1             | torch.float32 |         | -2.3375967        | 11.1527119       | -0.0000000     | 0.5658054             | torch.Size([2, 512, 64])         |
| 2638    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.mul(6)               | output              | torch.float32 |         | 0.0000000         | 124.3829803      | 0.5657969      | 15.0130625            | torch.Size([2, 512, 64])         |
| 2639    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(6)     | input_0             | torch.float32 |         | 0.0000000         | 124.3829803      | 0.5657969      | 15.0130625            | torch.Size([2, 512, 64])         |
| 2639    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.2.var_mean.mean(6)     | output              | torch.float32 |         | 0.0269089         | 14.0522423       | 0.5657969      | 3.7963212             | torch.Size([2, 512, 1])          |
| 2640    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(6)             | input               | torch.float32 |         | 0.0269089         | 14.0522423       | 0.5657969      | 3.7963212             | torch.Size([2, 512, 1])          |
| 2640    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.2.rsqrt(6)             | output              | torch.float32 |         | 0.2667639         | 6.0949669        | 4.1213675      | 2.9138350             | torch.Size([2, 512, 1])          |
| 2641    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(6)           | input_0             | torch.float32 |         | -2.3375967        | 11.1527119       | -0.0000000     | 0.5658054             | torch.Size([2, 512, 64])         |
| 2641    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(6)           | input_1             | torch.float32 |         | 0.2667639         | 6.0949669        | 4.1213675      | 2.9138350             | torch.Size([2, 512, 1])          |
| 2641    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.out_mul(6)           | output              | torch.float32 |         | -0.8859782        | 3.9041584        | -0.0000000     | 0.9998163             | torch.Size([2, 512, 64])         |
| 2642    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(6)      | input               | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 2642    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.weight_quant(6)      | output              | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 2643    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(6)        | input_0             | torch.float32 |         | -0.8859782        | 3.9041584        | -0.0000000     | 0.9998163             | torch.Size([2, 512, 64])         |
| 2643    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(6)        | input_1             | torch.float32 |         | 0.8691067         | 1.1281288        | 0.9794419      | 0.0036082             | torch.Size([64])                 |
| 2643    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.2.weight_mul(6)        | output              | torch.float32 |         | -0.9994975        | 3.8820548        | 0.0117961      | 0.9611433             | torch.Size([2, 512, 64])         |
| 2644    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(6)        | input               | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 2644    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.2.bias_quant(6)        | output              | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 2645    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(6)          | input_0             | torch.float32 |         | -0.9994975        | 3.8820548        | 0.0117961      | 0.9611433             | torch.Size([2, 512, 64])         |
| 2645    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(6)          | input_1             | torch.float32 |         | -0.1133662        | 0.1493634        | 0.0304540      | 0.0046508             | torch.Size([64])                 |
| 2645    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.2.bias_add(6)          | output              | torch.float32 |         | -0.9935482        | 3.8539948        | 0.0422501      | 0.8824897             | torch.Size([2, 512, 64])         |
| 2646    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(6)                   | input               | torch.float32 |         | -0.9935482        | 3.8539948        | 0.0422501      | 0.8824897             | torch.Size([2, 512, 64])         |
| 2646    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(6)                   | weight              | torch.float32 |         | -0.4523612        | 0.4813256        | -0.0014562     | 0.0096743             | torch.Size([64, 64])             |
| 2646    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(6)                   | bias                | torch.float32 |         | -0.1183558        | 0.2243176        | 0.0150283      | 0.0049289             | torch.Size([64])                 |
| 2646    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.3(6)                   | output              | torch.float32 |         | -5.3581324        | 4.0145082        | -0.3682509     | 2.1466908             | torch.Size([2, 512, 64])         |
| 2647    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(6)                   | input               | torch.float32 |         | 0.0000000         | 4.0145082        | 0.3636136      | 0.2969271             | torch.Size([2, 512, 64])         |
| 2647    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.4(6)                   | output              | torch.float32 |         | 0.0000000         | 4.0145082        | 0.3636136      | 0.2969271             | torch.Size([2, 512, 64])         |
| 2648    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 4.0145082        | 0.3636136      | 0.2969271             | torch.Size([2, 512, 64])         |
| 2648    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.input_mean.mean(6)   | output              | torch.float32 |         | 0.2111111         | 0.6607168        | 0.3636136      | 0.0143073             | torch.Size([2, 512, 1])          |
| 2649    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(6)               | input_0             | torch.float32 |         | 0.0000000         | 4.0145082        | 0.3636136      | 0.2969271             | torch.Size([2, 512, 64])         |
| 2649    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(6)               | input_1             | torch.float32 |         | 0.2111111         | 0.6607168        | 0.3636136      | 0.0143073             | torch.Size([2, 512, 1])          |
| 2649    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.5.sub(6)               | output              | torch.float32 |         | -0.6607168        | 3.4446716        | -0.0000000     | 0.2826335             | torch.Size([2, 512, 64])         |
| 2650    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(6)               | input_0             | torch.float32 |         | -0.6607168        | 3.4446716        | -0.0000000     | 0.2826335             | torch.Size([2, 512, 64])         |
| 2650    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(6)               | input_1             | torch.float32 |         | -0.6607168        | 3.4446716        | -0.0000000     | 0.2826335             | torch.Size([2, 512, 64])         |
| 2650    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.mul(6)               | output              | torch.float32 |         | 0.0000000         | 11.8657627       | 0.2826293      | 0.4143830             | torch.Size([2, 512, 64])         |
| 2651    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(6)     | input_0             | torch.float32 |         | 0.0000000         | 11.8657627       | 0.2826293      | 0.4143830             | torch.Size([2, 512, 64])         |
| 2651    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.5.var_mean.mean(6)     | output              | torch.float32 |         | 0.0837630         | 1.0005482        | 0.2826293      | 0.0332193             | torch.Size([2, 512, 1])          |
| 2652    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(6)             | input               | torch.float32 |         | 0.0837630         | 1.0005482        | 0.2826293      | 0.0332193             | torch.Size([2, 512, 1])          |
| 2652    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.5.rsqrt(6)             | output              | torch.float32 |         | 0.9997209         | 3.4549992        | 2.2147906      | 0.5781276             | torch.Size([2, 512, 1])          |
| 2653    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(6)           | input_0             | torch.float32 |         | -0.6607168        | 3.4446716        | -0.0000000     | 0.2826335             | torch.Size([2, 512, 64])         |
| 2653    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(6)           | input_1             | torch.float32 |         | 0.9997209         | 3.4549992        | 2.2147906      | 0.5781276             | torch.Size([2, 512, 1])          |
| 2653    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.out_mul(6)           | output              | torch.float32 |         | -0.8840482        | 4.2364254        | 0.0000000      | 0.9999605             | torch.Size([2, 512, 64])         |
| 2654    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(6)      | input               | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 2654    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.weight_quant(6)      | output              | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 2655    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(6)        | input_0             | torch.float32 |         | -0.8840482        | 4.2364254        | 0.0000000      | 0.9999605             | torch.Size([2, 512, 64])         |
| 2655    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(6)        | input_1             | torch.float32 |         | 0.8333027         | 1.1388558        | 0.9778216      | 0.0042186             | torch.Size([64])                 |
| 2655    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.5.weight_mul(6)        | output              | torch.float32 |         | -0.9456891        | 4.2149658        | 0.0051442      | 0.9865630             | torch.Size([2, 512, 64])         |
| 2656    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(6)        | input               | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 2656    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.5.bias_quant(6)        | output              | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 2657    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(6)          | input_0             | torch.float32 |         | -0.9456891        | 4.2149658        | 0.0051442      | 0.9865630             | torch.Size([2, 512, 64])         |
| 2657    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(6)          | input_1             | torch.float32 |         | -0.0757831        | 0.1161729        | 0.0164943      | 0.0016283             | torch.Size([64])                 |
| 2657    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.5.bias_add(6)          | output              | torch.float32 |         | -0.9090101        | 4.2016416        | 0.0216385      | 0.9493248             | torch.Size([2, 512, 64])         |
| 2658    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(6)                   | input               | torch.float32 |         | -0.9090101        | 4.2016416        | 0.0216385      | 0.9493248             | torch.Size([2, 512, 64])         |
| 2658    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(6)                   | weight              | torch.float32 |         | -0.5707353        | 0.3620123        | -0.0010372     | 0.0088292             | torch.Size([64, 64])             |
| 2658    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(6)                   | bias                | torch.float32 |         | -0.1720246        | 0.1340137        | -0.0235144     | 0.0050507             | torch.Size([64])                 |
| 2658    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.6(6)                   | output              | torch.float32 |         | -5.3050771        | 3.7161427        | -0.2846929     | 1.9672304             | torch.Size([2, 512, 64])         |
| 2659    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(6)                   | input               | torch.float32 |         | 0.0000000         | 3.7161427        | 0.4422377      | 0.4816613             | torch.Size([2, 512, 64])         |
| 2659    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.7(6)                   | output              | torch.float32 |         | 0.0000000         | 3.7161427        | 0.4422377      | 0.4816613             | torch.Size([2, 512, 64])         |
| 2660    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(6)   | input_0             | torch.float32 |         | 0.0000000         | 3.7161427        | 0.4422377      | 0.4816613             | torch.Size([2, 512, 64])         |
| 2660    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.input_mean.mean(6)   | output              | torch.float32 |         | 0.3352436         | 0.5185297        | 0.4422377      | 0.0020981             | torch.Size([2, 512, 1])          |
| 2661    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(6)               | input_0             | torch.float32 |         | 0.0000000         | 3.7161427        | 0.4422377      | 0.4816613             | torch.Size([2, 512, 64])         |
| 2661    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(6)               | input_1             | torch.float32 |         | 0.3352436         | 0.5185297        | 0.4422377      | 0.0020981             | torch.Size([2, 512, 1])          |
| 2661    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.8.sub(6)               | output              | torch.float32 |         | -0.5185297        | 3.2264094        | -0.0000000     | 0.4795651             | torch.Size([2, 512, 64])         |
| 2662    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(6)               | input_0             | torch.float32 |         | -0.5185297        | 3.2264094        | -0.0000000     | 0.4795651             | torch.Size([2, 512, 64])         |
| 2662    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(6)               | input_1             | torch.float32 |         | -0.5185297        | 3.2264094        | -0.0000000     | 0.4795651             | torch.Size([2, 512, 64])         |
| 2662    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.mul(6)               | output              | torch.float32 |         | 0.0000000         | 10.4097176       | 0.4795578      | 0.9824643             | torch.Size([2, 512, 64])         |
| 2663    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(6)     | input_0             | torch.float32 |         | 0.0000000         | 10.4097176       | 0.4795578      | 0.9824643             | torch.Size([2, 512, 64])         |
| 2663    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.8.var_mean.mean(6)     | output              | torch.float32 |         | 0.2531232         | 0.7325321        | 0.4795578      | 0.0104399             | torch.Size([2, 512, 1])          |
| 2664    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(6)             | input               | torch.float32 |         | 0.2531232         | 0.7325321        | 0.4795578      | 0.0104399             | torch.Size([2, 512, 1])          |
| 2664    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.8.rsqrt(6)             | output              | torch.float32 |         | 1.1683788         | 1.9875835        | 1.4698026      | 0.0261856             | torch.Size([2, 512, 1])          |
| 2665    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(6)           | input_0             | torch.float32 |         | -0.5185297        | 3.2264094        | -0.0000000     | 0.4795651             | torch.Size([2, 512, 64])         |
| 2665    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(6)           | input_1             | torch.float32 |         | 1.1683788         | 1.9875835        | 1.4698026      | 0.0261856             | torch.Size([2, 512, 1])          |
| 2665    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.out_mul(6)           | output              | torch.float32 |         | -0.7461100        | 4.1616426        | -0.0000000     | 0.9999934             | torch.Size([2, 512, 64])         |
| 2666    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(6)      | input               | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 2666    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.weight_quant(6)      | output              | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 2667    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(6)        | input_0             | torch.float32 |         | -0.7461100        | 4.1616426        | -0.0000000     | 0.9999934             | torch.Size([2, 512, 64])         |
| 2667    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(6)        | input_1             | torch.float32 |         | 0.8006503         | 1.1495361        | 0.9818506      | 0.0032003             | torch.Size([64])                 |
| 2667    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.8.weight_mul(6)        | output              | torch.float32 |         | -0.8116230        | 4.4080744        | 0.0059305      | 0.9989552             | torch.Size([2, 512, 64])         |
| 2668    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(6)        | input               | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 2668    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.8.bias_quant(6)        | output              | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 2669    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(6)          | input_0             | torch.float32 |         | -0.8116230        | 4.4080744        | 0.0059305      | 0.9989552             | torch.Size([2, 512, 64])         |
| 2669    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(6)          | input_1             | torch.float32 |         | -0.0461140        | 0.1411197        | 0.0132828      | 0.0015701             | torch.Size([64])                 |
| 2669    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.8.bias_add(6)          | output              | torch.float32 |         | -0.8177376        | 4.4268212        | 0.0192133      | 0.9842129             | torch.Size([2, 512, 64])         |
| 2670    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(6)                   | input               | torch.float32 |         | -0.8177376        | 4.4268212        | 0.0192133      | 0.9842129             | torch.Size([2, 512, 64])         |
| 2670    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(6)                   | weight              | torch.float32 |         | -0.5701389        | 0.3477888        | 0.0006721      | 0.0085883             | torch.Size([64, 64])             |
| 2670    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(6)                   | bias                | torch.float32 |         | -0.1677032        | 0.1709885        | -0.0237130     | 0.0070098             | torch.Size([64])                 |
| 2670    | torch.nn.modules.linear.Linear                                                    | head.anchor_encoder.vel_fc.9(6)                   | output              | torch.float32 |         | -4.7942810        | 7.2171793        | -0.3994355     | 1.6258855             | torch.Size([2, 512, 64])         |
| 2671    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(6)                  | input               | torch.float32 |         | 0.0000000         | 7.2171793        | 0.2800938      | 0.5860955             | torch.Size([2, 512, 64])         |
| 2671    | torch.nn.modules.activation.ReLU                                                  | head.anchor_encoder.vel_fc.10(6)                  | output              | torch.float32 |         | 0.0000000         | 7.2171793        | 0.2800938      | 0.5860955             | torch.Size([2, 512, 64])         |
| 2672    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(6)  | input_0             | torch.float32 |         | 0.0000000         | 7.2171793        | 0.2800938      | 0.5860955             | torch.Size([2, 512, 64])         |
| 2672    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.input_mean.mean(6)  | output              | torch.float32 |         | 0.2030616         | 0.4069741        | 0.2800938      | 0.0026766             | torch.Size([2, 512, 1])          |
| 2673    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(6)              | input_0             | torch.float32 |         | 0.0000000         | 7.2171793        | 0.2800938      | 0.5860955             | torch.Size([2, 512, 64])         |
| 2673    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(6)              | input_1             | torch.float32 |         | 0.2030616         | 0.4069741        | 0.2800938      | 0.0026766             | torch.Size([2, 512, 1])          |
| 2673    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.anchor_encoder.vel_fc.11.sub(6)              | output              | torch.float32 |         | -0.4069741        | 7.0123343        | -0.0000000     | 0.5834215             | torch.Size([2, 512, 64])         |
| 2674    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(6)              | input_0             | torch.float32 |         | -0.4069741        | 7.0123343        | -0.0000000     | 0.5834215             | torch.Size([2, 512, 64])         |
| 2674    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(6)              | input_1             | torch.float32 |         | -0.4069741        | 7.0123343        | -0.0000000     | 0.5834215             | torch.Size([2, 512, 64])         |
| 2674    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.mul(6)              | output              | torch.float32 |         | 0.0000000         | 49.1728325       | 0.5834126      | 14.2654505            | torch.Size([2, 512, 64])         |
| 2675    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(6)    | input_0             | torch.float32 |         | 0.0000000         | 49.1728325       | 0.5834126      | 14.2654505            | torch.Size([2, 512, 64])         |
| 2675    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.anchor_encoder.vel_fc.11.var_mean.mean(6)    | output              | torch.float32 |         | 0.1979450         | 0.8249229        | 0.5834126      | 0.0347366             | torch.Size([2, 512, 1])          |
| 2676    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(6)            | input               | torch.float32 |         | 0.1979450         | 0.8249229        | 0.5834126      | 0.0347366             | torch.Size([2, 512, 1])          |
| 2676    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.anchor_encoder.vel_fc.11.rsqrt(6)            | output              | torch.float32 |         | 1.1010085         | 2.2475882        | 1.3820301      | 0.0927432             | torch.Size([2, 512, 1])          |
| 2677    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(6)          | input_0             | torch.float32 |         | -0.4069741        | 7.0123343        | -0.0000000     | 0.5834215             | torch.Size([2, 512, 64])         |
| 2677    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(6)          | input_1             | torch.float32 |         | 1.1010085         | 2.2475882        | 1.3820301      | 0.0927432             | torch.Size([2, 512, 1])          |
| 2677    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.out_mul(6)          | output              | torch.float32 |         | -0.7138116        | 7.7596498        | 0.0000000      | 0.9999952             | torch.Size([2, 512, 64])         |
| 2678    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(6)     | input               | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 2678    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.weight_quant(6)     | output              | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 2679    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(6)       | input_0             | torch.float32 |         | -0.7138116        | 7.7596498        | 0.0000000      | 0.9999952             | torch.Size([2, 512, 64])         |
| 2679    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(6)       | input_1             | torch.float32 |         | 0.7297163         | 1.2824999        | 1.0134131      | 0.0161719             | torch.Size([64])                 |
| 2679    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.anchor_encoder.vel_fc.11.weight_mul(6)       | output              | torch.float32 |         | -0.8906131        | 5.9848075        | -0.0204602     | 0.8063048             | torch.Size([2, 512, 64])         |
| 2680    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(6)       | input               | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 2680    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.anchor_encoder.vel_fc.11.bias_quant(6)       | output              | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 2681    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(6)         | input_0             | torch.float32 |         | -0.8906131        | 5.9848075        | -0.0204602     | 0.8063048             | torch.Size([2, 512, 64])         |
| 2681    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(6)         | input_1             | torch.float32 |         | -0.2385408        | 0.3192695        | 0.0900053      | 0.0129013             | torch.Size([64])                 |
| 2681    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.anchor_encoder.vel_fc.11.bias_add(6)         | output              | torch.float32 |         | -0.8632271        | 6.1983929        | 0.0695452      | 0.7475605             | torch.Size([2, 512, 64])         |
| 2682    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | input_0             | torch.float32 |         | -0.8297419        | 7.3336802        | 0.0733797      | 0.8607930             | torch.Size([2, 512, 128])        |
| 2682    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | input_1             | torch.float32 |         | -1.7223351        | 5.0518861        | 0.0226954      | 1.3956898             | torch.Size([2, 512, 32])         |
| 2682    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | input_2             | torch.float32 |         | -0.9720487        | 4.8291078        | 0.0304891      | 0.8755097             | torch.Size([2, 512, 32])         |
| 2682    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | input_3             | torch.float32 |         | -0.8632271        | 6.1983929        | 0.0695452      | 0.7475605             | torch.Size([2, 512, 64])         |
| 2682    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.anchor_encoder.cat(6)                        | output              | torch.float32 |         | -1.7223351        | 7.3336802        | 0.0607242      | 0.9015698             | torch.Size([2, 512, 256])        |
| 2683    | torch.nn.modules.linear.Linear                                                    | head.fc_before(10)                                | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 2683    | torch.nn.modules.linear.Linear                                                    | head.fc_before(10)                                | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 2683    | torch.nn.modules.linear.Linear                                                    | head.fc_before(10)                                | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 2684    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.35.query_cat                          | input_0             | torch.float32 |         | -4.3518085        | 5.2251782        | 0.0052197      | 0.8203376             | torch.Size([2, 512, 256])        |
| 2684    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.35.query_cat                          | input_1             | torch.float32 |         | -1.7223351        | 7.3336802        | 0.0607242      | 0.9015698             | torch.Size([2, 512, 256])        |
| 2684    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.35.query_cat                          | output              | torch.float32 |         | -4.3518085        | 7.3336802        | 0.0329719      | 0.8617223             | torch.Size([2, 512, 512])        |
| 2685    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.35.key_cat                            | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 256])        |
| 2685    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.35.key_cat                            | input_1             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0508909      | 0.8514420             | torch.Size([2, 256, 256])        |
| 2685    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.35.key_cat                            | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 2686    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | input_0             | torch.float32 |         | -4.3518085        | 7.3336802        | 0.0329719      | 0.8617223             | torch.Size([2, 512, 512])        |
| 2686    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | output              | torch.float32 |         | -4.3518085        | 7.3336802        | 0.0329719      | 0.8617223             | torch.Size([512, 2, 512])        |
| 2687    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([2, 256, 512])        |
| 2687    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 2688    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([2, 256, 512])        |
| 2688    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 2689    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | input_0             | torch.float32 |         | -4.3518085        | 7.3336802        | 0.0329719      | 0.8617223             | torch.Size([512, 2, 512])        |
| 2689    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | output              | torch.float32 |         | -4.3518085        | 7.3336802        | 0.0329719      | 0.8617223             | torch.Size([512, 2, 512])        |
| 2690    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | input_0             | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 2690    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | output              | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 2691    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 2691    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 2692    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.q_proj                        | input               | torch.float32 |         | -4.3518085        | 7.3336802        | 0.0329719      | 0.8617223             | torch.Size([512, 2, 512])        |
| 2692    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.q_proj                        | weight              | torch.float32 |         | -0.3235276        | 0.4215601        | -0.0001558     | 0.0035258             | torch.Size([512, 512])           |
| 2692    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.q_proj                        | bias                | torch.float32 |         | -0.0954634        | 0.0875029        | 0.0007627      | 0.0007613             | torch.Size([512])                |
| 2692    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.q_proj                        | output              | torch.float32 |         | -12.5752687       | 12.4284668       | 0.0211923      | 8.8569679             | torch.Size([512, 2, 512])        |
| 2693    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.k_proj                        | input               | torch.float32 |         | -1.0113386        | 5.3709445        | 0.0254454      | 0.4263669             | torch.Size([256, 2, 512])        |
| 2693    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.k_proj                        | weight              | torch.float32 |         | -0.5609900        | 0.5793647        | -0.0000159     | 0.0038509             | torch.Size([512, 512])           |
| 2693    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.k_proj                        | bias                | torch.float32 |         | -0.0054600        | 0.0029553        | -0.0000182     | 0.0000005             | torch.Size([512])                |
| 2693    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.k_proj                        | output              | torch.float32 |         | -5.9738407        | 6.9967513        | -0.0139036     | 4.4791522             | torch.Size([256, 2, 512])        |
| 2694    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.v_proj                        | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([256, 2, 512])        |
| 2694    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.v_proj                        | weight              | torch.float32 |         | -0.3044824        | 0.3430385        | -0.0000593     | 0.0016902             | torch.Size([512, 512])           |
| 2694    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.v_proj                        | bias                | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006061             | torch.Size([512])                |
| 2694    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.v_proj                        | output              | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([256, 2, 512])        |
| 2695    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | input_0             | torch.float32 |         | -12.5752687       | 12.4284668       | 0.0211923      | 8.8569679             | torch.Size([512, 2, 512])        |
| 2695    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | output              | torch.float32 |         | -12.5752687       | 12.4284668       | 0.0211923      | 8.8569679             | torch.Size([512, 16, 64])        |
| 2696    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | input_0             | torch.float32 |         | -12.5752687       | 12.4284668       | 0.0211923      | 8.8569679             | torch.Size([512, 16, 64])        |
| 2696    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | output              | torch.float32 |         | -12.5752687       | 12.4284668       | 0.0211923      | 8.8569679             | torch.Size([16, 512, 64])        |
| 2697    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | input_0             | torch.float32 |         | -5.9738407        | 6.9967513        | -0.0139036     | 4.4791522             | torch.Size([256, 2, 512])        |
| 2697    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | output              | torch.float32 |         | -5.9738407        | 6.9967513        | -0.0139036     | 4.4791522             | torch.Size([256, 16, 64])        |
| 2698    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | input_0             | torch.float32 |         | -5.9738407        | 6.9967513        | -0.0139036     | 4.4791522             | torch.Size([256, 16, 64])        |
| 2698    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | output              | torch.float32 |         | -5.9738407        | 6.9967513        | -0.0139036     | 4.4791522             | torch.Size([16, 256, 64])        |
| 2699    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | input_0             | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([256, 2, 512])        |
| 2699    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | output              | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([256, 16, 64])        |
| 2700    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | input_0             | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([256, 16, 64])        |
| 2700    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | output              | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([16, 256, 64])        |
| 2701    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.35.attn.q_scale_mul                   | input_0             | torch.float32 |         | -12.5752687       | 12.4284668       | 0.0211923      | 8.8569679             | torch.Size([16, 512, 64])        |
| 2701    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.35.attn.q_scale_mul                   | output              | torch.float32 |         | -1.5719086        | 1.5535583        | 0.0026490      | 0.1383901             | torch.Size([16, 512, 64])        |
| 2702    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | input_0             | torch.float32 |         | -5.9738407        | 6.9967513        | -0.0139036     | 4.4791522             | torch.Size([16, 256, 64])        |
| 2702    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | output              | torch.float32 |         | -5.9738407        | 6.9967513        | -0.0139036     | 4.4791522             | torch.Size([16, 64, 256])        |
| 2703    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.35.attn.matmul                        | input_0             | torch.float32 |         | -1.5719086        | 1.5535583        | 0.0026490      | 0.1383901             | torch.Size([16, 512, 64])        |
| 2703    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.35.attn.matmul                        | input_1             | torch.float32 |         | -5.9738407        | 6.9967513        | -0.0139036     | 4.4791522             | torch.Size([16, 64, 256])        |
| 2703    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.35.attn.matmul                        | output              | torch.float32 |         | -99.7187424       | 59.8792801       | -4.4275541     | 360.5614929           | torch.Size([16, 512, 256])       |
| 2704    | torch.Tensor.max                                                                  | head.layers.35.attn.softmax                       | input               | torch.float32 |         | -99.7187424       | 59.8792801       | -4.4275541     | 360.5614929           | torch.Size([16, 512, 256])       |
| 2704    | torch.Tensor.max                                                                  | head.layers.35.attn.softmax                       | output_0            | torch.float32 |         | -99.7187424       | 59.8792801       | -4.4275541     | 360.6052856           | torch.Size([16, 512, 1])         |
| 2704    | torch.Tensor.max                                                                  | head.layers.35.attn.softmax                       | output_1            | torch.int64   |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 1])         |
| 2705    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.35.attn.softmax.sub                   | input_0             | torch.float32 |         | -99.7187424       | 59.8792801       | -4.4275541     | 360.5614929           | torch.Size([16, 512, 256])       |
| 2705    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.35.attn.softmax.sub                   | input_1             | torch.float32 |         | -99.7187424       | 59.8792801       | -4.4275541     | 360.6052856           | torch.Size([16, 512, 1])         |
| 2705    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.35.attn.softmax.sub                   | output              | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2706    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.35.attn.softmax.exp                   | input               | torch.float32 |         | 0.0000000         | 0.0000000        | 0.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2706    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.35.attn.softmax.exp                   | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2707    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.35.attn.softmax.sum                   | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2707    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.35.attn.softmax.sum                   | output              | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 2708    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.35.attn.softmax.reciprocal            | input               | torch.float32 |         | 256.0000000       | 256.0000000      | 256.0000000    | 0.0000000             | torch.Size([16, 512, 1])         |
| 2708    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.35.attn.softmax.reciprocal            | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 2709    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.35.attn.softmax.mul                   | input_0             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2709    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.35.attn.softmax.mul                   | input_1             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 1])         |
| 2709    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.35.attn.softmax.mul                   | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2710    | torch.nn.modules.dropout.Dropout                                                  | head.layers.35.attn.attention_drop                | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2710    | torch.nn.modules.dropout.Dropout                                                  | head.layers.35.attn.attention_drop                | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2711    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.35.attn.attn_matmul                   | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2711    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.35.attn.attn_matmul                   | input_1             | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([16, 256, 64])        |
| 2711    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.35.attn.attn_matmul                   | output              | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([16, 512, 64])        |
| 2712    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | input_0             | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([16, 512, 64])        |
| 2712    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | output              | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([512, 16, 64])        |
| 2713    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | input_0             | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([512, 16, 64])        |
| 2713    | torch.Tensor.reshape                                                              | head.layers.35.attn                               | output              | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([512, 2, 512])        |
| 2714    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.out_proj                      | input               | torch.float32 |         | -0.0821221        | 0.0959587        | 0.0009844      | 0.0006049             | torch.Size([512, 2, 512])        |
| 2714    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.out_proj                      | weight              | torch.float32 |         | -0.2512448        | 0.2980582        | -0.0000690     | 0.0024223             | torch.Size([512, 512])           |
| 2714    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.out_proj                      | bias                | torch.float32 |         | -0.3283637        | 0.3022734        | 0.0070495      | 0.0084595             | torch.Size([512])                |
| 2714    | torch.nn.modules.linear.Linear                                                    | head.layers.35.attn.out_proj                      | output              | torch.float32 |         | -0.4557147        | 0.3130397        | 0.0110905      | 0.0154120             | torch.Size([512, 2, 512])        |
| 2715    | torch.Tensor.view                                                                 | head.layers.35.attn                               | input_0             | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([16, 512, 256])       |
| 2715    | torch.Tensor.view                                                                 | head.layers.35.attn                               | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 2716    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.35.attn.attn_weights_mean             | input               | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 8, 512, 256])     |
| 2716    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.35.attn.attn_weights_mean             | output              | torch.float32 |         | 0.0039062         | 0.0039062        | 0.0039062      | 0.0000000             | torch.Size([2, 512, 256])        |
| 2717    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | input_0             | torch.float32 |         | -0.4557147        | 0.3130397        | 0.0110905      | 0.0154120             | torch.Size([512, 2, 512])        |
| 2717    | torch.Tensor.transpose                                                            | head.layers.35.attn                               | output              | torch.float32 |         | -0.4557147        | 0.3130397        | 0.0110905      | 0.0154120             | torch.Size([2, 512, 512])        |
| 2718    | torch.nn.modules.dropout.Dropout                                                  | head.layers.35.dropout                            | input               | torch.float32 |         | -0.4557147        | 0.3130397        | 0.0110905      | 0.0154120             | torch.Size([2, 512, 512])        |
| 2718    | torch.nn.modules.dropout.Dropout                                                  | head.layers.35.dropout                            | output              | torch.float32 |         | -0.4557147        | 0.3130397        | 0.0110905      | 0.0154120             | torch.Size([2, 512, 512])        |
| 2719    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.35.add                                | input_0             | torch.float32 |         | -4.3518085        | 7.3336802        | 0.0329719      | 0.8617223             | torch.Size([2, 512, 512])        |
| 2719    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.35.add                                | input_1             | torch.float32 |         | -0.4557147        | 0.3130397        | 0.0110905      | 0.0154120             | torch.Size([2, 512, 512])        |
| 2719    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.35.add                                | output              | torch.float32 |         | -4.4482341        | 7.0790982        | 0.0440625      | 0.8271376             | torch.Size([2, 512, 512])        |
| 2720    | torch.nn.modules.linear.Linear                                                    | head.fc_after(10)                                 | input               | torch.float32 |         | -4.4482341        | 7.0790982        | 0.0440625      | 0.8271376             | torch.Size([2, 512, 512])        |
| 2720    | torch.nn.modules.linear.Linear                                                    | head.fc_after(10)                                 | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 2720    | torch.nn.modules.linear.Linear                                                    | head.fc_after(10)                                 | output              | torch.float32 |         | -6.4843450        | 6.0561466        | 0.0293336      | 1.0062978             | torch.Size([2, 512, 256])        |
| 2721    | torch.nn.modules.linear.Linear                                                    | head.fc_before(11)                                | input               | torch.float32 |         | -6.4843450        | 6.0561466        | 0.0293336      | 1.0062978             | torch.Size([2, 512, 256])        |
| 2721    | torch.nn.modules.linear.Linear                                                    | head.fc_before(11)                                | weight              | torch.float32 |         | -0.1090298        | 0.1089591        | -0.0000406     | 0.0005908             | torch.Size([512, 256])           |
| 2721    | torch.nn.modules.linear.Linear                                                    | head.fc_before(11)                                | output              | torch.float32 |         | -3.5504229        | 3.5004282        | 0.0029312      | 0.0655703             | torch.Size([2, 512, 512])        |
| 2722    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.36.query_cat                          | input_0             | torch.float32 |         | -6.4843450        | 6.0561466        | 0.0293336      | 1.0062978             | torch.Size([2, 512, 256])        |
| 2722    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.36.query_cat                          | input_1             | torch.float32 |         | -1.7223351        | 7.3336802        | 0.0607242      | 0.9015698             | torch.Size([2, 512, 256])        |
| 2722    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.36.query_cat                          | output              | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([2, 512, 512])        |
| 2723    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.36.key_cat                            | input_0             | torch.float32 |         | -6.4843450        | 6.0561466        | 0.0293336      | 1.0062978             | torch.Size([2, 512, 256])        |
| 2723    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.36.key_cat                            | input_1             | torch.float32 |         | -1.7223351        | 7.3336802        | 0.0607242      | 0.9015698             | torch.Size([2, 512, 256])        |
| 2723    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.36.key_cat                            | output              | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([2, 512, 512])        |
| 2724    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | input_0             | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([2, 512, 512])        |
| 2724    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | output              | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([512, 2, 512])        |
| 2725    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | input_0             | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([2, 512, 512])        |
| 2725    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | output              | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([512, 2, 512])        |
| 2726    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | input_0             | torch.float32 |         | -3.5504229        | 3.5004282        | 0.0029312      | 0.0655703             | torch.Size([2, 512, 512])        |
| 2726    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | output              | torch.float32 |         | -3.5504229        | 3.5004282        | 0.0029312      | 0.0655703             | torch.Size([512, 2, 512])        |
| 2727    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | input_0             | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([512, 2, 512])        |
| 2727    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | output              | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([512, 2, 512])        |
| 2728    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | input_0             | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([512, 2, 512])        |
| 2728    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | output              | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([512, 2, 512])        |
| 2729    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | input_0             | torch.float32 |         | -3.5504229        | 3.5004282        | 0.0029312      | 0.0655703             | torch.Size([512, 2, 512])        |
| 2729    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | output              | torch.float32 |         | -3.5504229        | 3.5004282        | 0.0029312      | 0.0655703             | torch.Size([512, 2, 512])        |
| 2730    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.q_proj                        | input               | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([512, 2, 512])        |
| 2730    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.q_proj                        | weight              | torch.float32 |         | -0.3146838        | 0.3318836        | 0.0000977      | 0.0028868             | torch.Size([512, 512])           |
| 2730    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.q_proj                        | bias                | torch.float32 |         | -0.1396752        | 0.1003755        | -0.0017663     | 0.0008599             | torch.Size([512])                |
| 2730    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.q_proj                        | output              | torch.float32 |         | -10.3117218       | 10.8579397       | -0.0500439     | 4.5823011             | torch.Size([512, 2, 512])        |
| 2731    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.k_proj                        | input               | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([512, 2, 512])        |
| 2731    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.k_proj                        | weight              | torch.float32 |         | -0.9564776        | 0.9354519        | -0.0000881     | 0.0038703             | torch.Size([512, 512])           |
| 2731    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.k_proj                        | bias                | torch.float32 |         | -0.1178043        | 0.1006244        | -0.0005137     | 0.0002969             | torch.Size([512])                |
| 2731    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.k_proj                        | output              | torch.float32 |         | -16.5386505       | 17.9928055       | 0.0148321      | 7.6419821             | torch.Size([512, 2, 512])        |
| 2732    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.v_proj                        | input               | torch.float32 |         | -3.5504229        | 3.5004282        | 0.0029312      | 0.0655703             | torch.Size([512, 2, 512])        |
| 2732    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.v_proj                        | weight              | torch.float32 |         | -0.2458883        | 0.2633308        | -0.0000698     | 0.0018804             | torch.Size([512, 512])           |
| 2732    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.v_proj                        | bias                | torch.float32 |         | -0.1800991        | 0.2041788        | 0.0003858      | 0.0020850             | torch.Size([512])                |
| 2732    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.v_proj                        | output              | torch.float32 |         | -4.6249700        | 3.7010930        | 0.0019094      | 0.1618744             | torch.Size([512, 2, 512])        |
| 2733    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | input_0             | torch.float32 |         | -10.3117218       | 10.8579397       | -0.0500439     | 4.5823011             | torch.Size([512, 2, 512])        |
| 2733    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | output              | torch.float32 |         | -10.3117218       | 10.8579397       | -0.0500439     | 4.5823011             | torch.Size([512, 16, 64])        |
| 2734    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | input_0             | torch.float32 |         | -10.3117218       | 10.8579397       | -0.0500439     | 4.5823011             | torch.Size([512, 16, 64])        |
| 2734    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | output              | torch.float32 |         | -10.3117218       | 10.8579397       | -0.0500439     | 4.5823011             | torch.Size([16, 512, 64])        |
| 2735    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | input_0             | torch.float32 |         | -16.5386505       | 17.9928055       | 0.0148321      | 7.6419821             | torch.Size([512, 2, 512])        |
| 2735    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | output              | torch.float32 |         | -16.5386505       | 17.9928055       | 0.0148321      | 7.6419821             | torch.Size([512, 16, 64])        |
| 2736    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | input_0             | torch.float32 |         | -16.5386505       | 17.9928055       | 0.0148321      | 7.6419821             | torch.Size([512, 16, 64])        |
| 2736    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | output              | torch.float32 |         | -16.5386505       | 17.9928055       | 0.0148321      | 7.6419821             | torch.Size([16, 512, 64])        |
| 2737    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | input_0             | torch.float32 |         | -4.6249700        | 3.7010930        | 0.0019094      | 0.1618744             | torch.Size([512, 2, 512])        |
| 2737    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | output              | torch.float32 |         | -4.6249700        | 3.7010930        | 0.0019094      | 0.1618744             | torch.Size([512, 16, 64])        |
| 2738    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | input_0             | torch.float32 |         | -4.6249700        | 3.7010930        | 0.0019094      | 0.1618744             | torch.Size([512, 16, 64])        |
| 2738    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | output              | torch.float32 |         | -4.6249700        | 3.7010930        | 0.0019094      | 0.1618744             | torch.Size([16, 512, 64])        |
| 2739    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.36.attn.q_scale_mul                   | input_0             | torch.float32 |         | -10.3117218       | 10.8579397       | -0.0500439     | 4.5823011             | torch.Size([16, 512, 64])        |
| 2739    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul_scalar | head.layers.36.attn.q_scale_mul                   | output              | torch.float32 |         | -1.2889652        | 1.3572425        | -0.0062555     | 0.0715985             | torch.Size([16, 512, 64])        |
| 2740    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | input_0             | torch.float32 |         | -16.5386505       | 17.9928055       | 0.0148321      | 7.6419821             | torch.Size([16, 512, 64])        |
| 2740    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | output              | torch.float32 |         | -16.5386505       | 17.9928055       | 0.0148321      | 7.6419821             | torch.Size([16, 64, 512])        |
| 2741    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.36.attn.matmul                        | input_0             | torch.float32 |         | -1.2889652        | 1.3572425        | -0.0062555     | 0.0715985             | torch.Size([16, 512, 64])        |
| 2741    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.36.attn.matmul                        | input_1             | torch.float32 |         | -16.5386505       | 17.9928055       | 0.0148321      | 7.6419821             | torch.Size([16, 64, 512])        |
| 2741    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.36.attn.matmul                        | output              | torch.float32 |         | -83.7184067       | 153.8467102      | -2.3886709     | 393.2697754           | torch.Size([16, 512, 512])       |
| 2742    | torch.Tensor.max                                                                  | head.layers.36.attn.softmax                       | input               | torch.float32 |         | -83.7184067       | 153.8467102      | -2.3886709     | 393.2697754           | torch.Size([16, 512, 512])       |
| 2742    | torch.Tensor.max                                                                  | head.layers.36.attn.softmax                       | output_0            | torch.float32 |         | -0.0480392        | 153.8467102      | 31.9635143     | 661.0639038           | torch.Size([16, 512, 1])         |
| 2742    | torch.Tensor.max                                                                  | head.layers.36.attn.softmax                       | output_1            | torch.int64   |         | 0.0000000         | 511.0000000      | 306.7520752    | 13270.6181641         | torch.Size([16, 512, 1])         |
| 2743    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.36.attn.softmax.sub                   | input_0             | torch.float32 |         | -83.7184067       | 153.8467102      | -2.3886709     | 393.2697754           | torch.Size([16, 512, 512])       |
| 2743    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.36.attn.softmax.sub                   | input_1             | torch.float32 |         | -0.0480392        | 153.8467102      | 31.9635143     | 661.0639038           | torch.Size([16, 512, 1])         |
| 2743    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.36.attn.softmax.sub                   | output              | torch.float32 |         | -231.1035461      | 0.0000000        | -34.3521843    | 1095.0845947          | torch.Size([16, 512, 512])       |
| 2744    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.36.attn.softmax.exp                   | input               | torch.float32 |         | -231.1035461      | 0.0000000        | -34.3521843    | 1095.0845947          | torch.Size([16, 512, 512])       |
| 2744    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.36.attn.softmax.exp                   | output              | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0126492      | 0.0061681             | torch.Size([16, 512, 512])       |
| 2745    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.36.attn.softmax.sum                   | input               | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0126492      | 0.0061681             | torch.Size([16, 512, 512])       |
| 2745    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.36.attn.softmax.sum                   | output              | torch.float32 |         | 1.0000273         | 136.5774841      | 6.4763956      | 118.3439560           | torch.Size([16, 512, 1])         |
| 2746    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.36.attn.softmax.reciprocal            | input               | torch.float32 |         | 1.0000273         | 136.5774841      | 6.4763956      | 118.3439560           | torch.Size([16, 512, 1])         |
| 2746    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.36.attn.softmax.reciprocal            | output              | torch.float32 |         | 0.0073219         | 0.9999727        | 0.3283707      | 0.0553623             | torch.Size([16, 512, 1])         |
| 2747    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.36.attn.softmax.mul                   | input_0             | torch.float32 |         | 0.0000000         | 1.0000000        | 0.0126492      | 0.0061681             | torch.Size([16, 512, 512])       |
| 2747    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.36.attn.softmax.mul                   | input_1             | torch.float32 |         | 0.0073219         | 0.9999727        | 0.3283707      | 0.0553623             | torch.Size([16, 512, 1])         |
| 2747    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.36.attn.softmax.mul                   | output              | torch.float32 |         | 0.0000000         | 0.9999727        | 0.0019531      | 0.0004234             | torch.Size([16, 512, 512])       |
| 2748    | torch.nn.modules.dropout.Dropout                                                  | head.layers.36.attn.attention_drop                | input               | torch.float32 |         | 0.0000000         | 0.9999727        | 0.0019531      | 0.0004234             | torch.Size([16, 512, 512])       |
| 2748    | torch.nn.modules.dropout.Dropout                                                  | head.layers.36.attn.attention_drop                | output              | torch.float32 |         | 0.0000000         | 0.9999727        | 0.0019531      | 0.0004234             | torch.Size([16, 512, 512])       |
| 2749    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.36.attn.attn_matmul                   | input_0             | torch.float32 |         | 0.0000000         | 0.9999727        | 0.0019531      | 0.0004234             | torch.Size([16, 512, 512])       |
| 2749    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.36.attn.attn_matmul                   | input_1             | torch.float32 |         | -4.6249700        | 3.7010930        | 0.0019094      | 0.1618744             | torch.Size([16, 512, 64])        |
| 2749    | horizon_plugin_pytorch.nn.matmul.Matmul                                           | head.layers.36.attn.attn_matmul                   | output              | torch.float32 |         | -3.3160412        | 2.4165351        | -0.0009395     | 0.1245334             | torch.Size([16, 512, 64])        |
| 2750    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | input_0             | torch.float32 |         | -3.3160412        | 2.4165351        | -0.0009395     | 0.1245334             | torch.Size([16, 512, 64])        |
| 2750    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | output              | torch.float32 |         | -3.3160412        | 2.4165351        | -0.0009395     | 0.1245334             | torch.Size([512, 16, 64])        |
| 2751    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | input_0             | torch.float32 |         | -3.3160412        | 2.4165351        | -0.0009395     | 0.1245334             | torch.Size([512, 16, 64])        |
| 2751    | torch.Tensor.reshape                                                              | head.layers.36.attn                               | output              | torch.float32 |         | -3.3160412        | 2.4165351        | -0.0009395     | 0.1245334             | torch.Size([512, 2, 512])        |
| 2752    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.out_proj                      | input               | torch.float32 |         | -3.3160412        | 2.4165351        | -0.0009395     | 0.1245334             | torch.Size([512, 2, 512])        |
| 2752    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.out_proj                      | weight              | torch.float32 |         | -0.2637568        | 0.2630204        | 0.0000084      | 0.0029881             | torch.Size([512, 512])           |
| 2752    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.out_proj                      | bias                | torch.float32 |         | -0.3658920        | 0.3991215        | 0.0013332      | 0.0151695             | torch.Size([512])                |
| 2752    | torch.nn.modules.linear.Linear                                                    | head.layers.36.attn.out_proj                      | output              | torch.float32 |         | -3.5700912        | 3.7292984        | -0.0121681     | 0.6175176             | torch.Size([512, 2, 512])        |
| 2753    | torch.Tensor.view                                                                 | head.layers.36.attn                               | input_0             | torch.float32 |         | 0.0000000         | 0.9999727        | 0.0019531      | 0.0004234             | torch.Size([16, 512, 512])       |
| 2753    | torch.Tensor.view                                                                 | head.layers.36.attn                               | output              | torch.float32 |         | 0.0000000         | 0.9999727        | 0.0019531      | 0.0004234             | torch.Size([2, 8, 512, 512])     |
| 2754    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.36.attn.attn_weights_mean             | input               | torch.float32 |         | 0.0000000         | 0.9999727        | 0.0019531      | 0.0004234             | torch.Size([2, 8, 512, 512])     |
| 2754    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.36.attn.attn_weights_mean             | output              | torch.float32 |         | 0.0000000         | 0.2288126        | 0.0019531      | 0.0000617             | torch.Size([2, 512, 512])        |
| 2755    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | input_0             | torch.float32 |         | -3.5700912        | 3.7292984        | -0.0121681     | 0.6175176             | torch.Size([512, 2, 512])        |
| 2755    | torch.Tensor.transpose                                                            | head.layers.36.attn                               | output              | torch.float32 |         | -3.5700912        | 3.7292984        | -0.0121681     | 0.6175176             | torch.Size([2, 512, 512])        |
| 2756    | torch.nn.modules.dropout.Dropout                                                  | head.layers.36.dropout                            | input               | torch.float32 |         | -3.5700912        | 3.7292984        | -0.0121681     | 0.6175176             | torch.Size([2, 512, 512])        |
| 2756    | torch.nn.modules.dropout.Dropout                                                  | head.layers.36.dropout                            | output              | torch.float32 |         | -3.5700912        | 3.7292984        | -0.0121681     | 0.6175176             | torch.Size([2, 512, 512])        |
| 2757    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.36.add                                | input_0             | torch.float32 |         | -6.4843450        | 7.3336802        | 0.0450289      | 0.9541784             | torch.Size([2, 512, 512])        |
| 2757    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.36.add                                | input_1             | torch.float32 |         | -3.5700912        | 3.7292984        | -0.0121681     | 0.6175176             | torch.Size([2, 512, 512])        |
| 2757    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.36.add                                | output              | torch.float32 |         | -7.5501776        | 8.4718161        | 0.0328608      | 1.6085800             | torch.Size([2, 512, 512])        |
| 2758    | torch.nn.modules.linear.Linear                                                    | head.fc_after(11)                                 | input               | torch.float32 |         | -7.5501776        | 8.4718161        | 0.0328608      | 1.6085800             | torch.Size([2, 512, 512])        |
| 2758    | torch.nn.modules.linear.Linear                                                    | head.fc_after(11)                                 | weight              | torch.float32 |         | -0.3694984        | 0.3971221        | -0.0001689     | 0.0017596             | torch.Size([256, 512])           |
| 2758    | torch.nn.modules.linear.Linear                                                    | head.fc_after(11)                                 | output              | torch.float32 |         | -40.9236374       | 31.0698776       | -0.0178662     | 15.5767784            | torch.Size([2, 512, 256])        |
| 2759    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.37.input_mean.mean                    | input_0             | torch.float32 |         | -40.9236374       | 31.0698776       | -0.0178662     | 15.5767784            | torch.Size([2, 512, 256])        |
| 2759    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.37.input_mean.mean                    | output              | torch.float32 |         | -0.1363428        | 0.2828000        | -0.0178662     | 0.0048766             | torch.Size([2, 512, 1])          |
| 2760    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.37.sub                                | input_0             | torch.float32 |         | -40.9236374       | 31.0698776       | -0.0178662     | 15.5767784            | torch.Size([2, 512, 256])        |
| 2760    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.37.sub                                | input_1             | torch.float32 |         | -0.1363428        | 0.2828000        | -0.0178662     | 0.0048766             | torch.Size([2, 512, 1])          |
| 2760    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.37.sub                                | output              | torch.float32 |         | -41.1477432       | 30.9756031       | 0.0000000      | 15.5719070            | torch.Size([2, 512, 256])        |
| 2761    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.37.mul                                | input_0             | torch.float32 |         | -41.1477432       | 30.9756031       | 0.0000000      | 15.5719070            | torch.Size([2, 512, 256])        |
| 2761    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.37.mul                                | input_1             | torch.float32 |         | -41.1477432       | 30.9756031       | 0.0000000      | 15.5719070            | torch.Size([2, 512, 256])        |
| 2761    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.37.mul                                | output              | torch.float32 |         | 0.0000000         | 1693.1367188     | 15.5718470     | 6811.0712891          | torch.Size([2, 512, 256])        |
| 2762    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.37.var_mean.mean                      | input_0             | torch.float32 |         | 0.0000000         | 1693.1367188     | 15.5718470     | 6811.0712891          | torch.Size([2, 512, 256])        |
| 2762    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.37.var_mean.mean                      | output              | torch.float32 |         | 7.1029849         | 27.3616447       | 15.5718479     | 19.9929485            | torch.Size([2, 512, 1])          |
| 2763    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.37.rsqrt                              | input               | torch.float32 |         | 7.1029849         | 27.3616447       | 15.5718479     | 19.9929485            | torch.Size([2, 512, 1])          |
| 2763    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.37.rsqrt                              | output              | torch.float32 |         | 0.1911740         | 0.3752142        | 0.2612075      | 0.0013712             | torch.Size([2, 512, 1])          |
| 2764    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.37.out_mul                            | input_0             | torch.float32 |         | -41.1477432       | 30.9756031       | 0.0000000      | 15.5719070            | torch.Size([2, 512, 256])        |
| 2764    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.37.out_mul                            | input_1             | torch.float32 |         | 0.1911740         | 0.3752142        | 0.2612075      | 0.0013712             | torch.Size([2, 512, 1])          |
| 2764    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.37.out_mul                            | output              | torch.float32 |         | -8.5145369        | 6.1699991        | 0.0000000      | 1.0000031             | torch.Size([2, 512, 256])        |
| 2765    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.37.weight_quant                       | input               | torch.float32 |         | 0.7167655         | 1.1553942        | 0.9289461      | 0.0046820             | torch.Size([256])                |
| 2765    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.37.weight_quant                       | output              | torch.float32 |         | 0.7167655         | 1.1553942        | 0.9289461      | 0.0046820             | torch.Size([256])                |
| 2766    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.37.weight_mul                         | input_0             | torch.float32 |         | -8.5145369        | 6.1699991        | 0.0000000      | 1.0000031             | torch.Size([2, 512, 256])        |
| 2766    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.37.weight_mul                         | input_1             | torch.float32 |         | 0.7167655         | 1.1553942        | 0.9289461      | 0.0046820             | torch.Size([256])                |
| 2766    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.37.weight_mul                         | output              | torch.float32 |         | -6.2472930        | 4.6114011        | 0.0026105      | 0.6808797             | torch.Size([2, 512, 256])        |
| 2767    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.37.bias_quant                         | input               | torch.float32 |         | -0.2403839        | 0.2585355        | 0.0083271      | 0.0031905             | torch.Size([256])                |
| 2767    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.37.bias_quant                         | output              | torch.float32 |         | -0.2403839        | 0.2585355        | 0.0083271      | 0.0031905             | torch.Size([256])                |
| 2768    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.37.bias_add                           | input_0             | torch.float32 |         | -6.2472930        | 4.6114011        | 0.0026105      | 0.6808797             | torch.Size([2, 512, 256])        |
| 2768    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.37.bias_add                           | input_1             | torch.float32 |         | -0.2403839        | 0.2585355        | 0.0083271      | 0.0031905             | torch.Size([256])                |
| 2768    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.37.bias_add                           | output              | torch.float32 |         | -5.9887576        | 4.3753686        | 0.0109376      | 0.6464748             | torch.Size([2, 512, 256])        |
| 2769    | torch.nn.modules.linear.Linear                                                    | head.layers.38.kps_generator.offset               | input               | torch.float32 |         | -5.9887576        | 4.3753686        | 0.0109376      | 0.6464748             | torch.Size([2, 512, 256])        |
| 2769    | torch.nn.modules.linear.Linear                                                    | head.layers.38.kps_generator.offset               | weight              | torch.float32 |         | -0.2949824        | 0.2879395        | -0.0002231     | 0.0054715             | torch.Size([24, 256])            |
| 2769    | torch.nn.modules.linear.Linear                                                    | head.layers.38.kps_generator.offset               | bias                | torch.float32 |         | -0.1117399        | 0.0869147        | -0.0169646     | 0.0027590             | torch.Size([24])                 |
| 2769    | torch.nn.modules.linear.Linear                                                    | head.layers.38.kps_generator.offset               | output              | torch.float32 |         | -7.7515392        | 7.4948549        | -0.3259161     | 2.5189850             | torch.Size([2, 512, 24])         |
| 2770    | torch.Tensor.view                                                                 | head.layers.38.kps_generator                      | input_0             | torch.float32 |         | -7.7515392        | 7.4948549        | -0.3259161     | 2.5189850             | torch.Size([2, 512, 24])         |
| 2770    | torch.Tensor.view                                                                 | head.layers.38.kps_generator                      | output              | torch.float32 |         | -7.7515392        | 7.4948549        | -0.3259161     | 2.5189850             | torch.Size([2, 512, 8, 3])       |
| 2771    | torch.Tensor.__getitem__                                                          | head.layers.38.kps_generator                      | input_0             | torch.float32 |         | -53.3880920       | 53.3906403       | 0.2613437      | 80.2549210            | torch.Size([2, 512, 11])         |
| 2771    | torch.Tensor.__getitem__                                                          | head.layers.38.kps_generator                      | output              | torch.float32 |         | -53.3880920       | 53.3906403       | 1.0072179      | 290.4691772           | torch.Size([2, 512, 1, 3])       |
| 2772    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.kps_generator.keypoints_add        | input_0             | torch.float32 |         | -7.7515392        | 7.4948549        | -0.3259161     | 2.5189850             | torch.Size([2, 512, 8, 3])       |
| 2772    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.kps_generator.keypoints_add        | input_1             | torch.float32 |         | -53.3880920       | 53.3906403       | 1.0072179      | 290.4691772           | torch.Size([2, 512, 1, 3])       |
| 2772    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.kps_generator.keypoints_add        | output              | torch.float32 |         | -60.0323524       | 58.7404404       | 0.6813018      | 293.5900879           | torch.Size([2, 512, 8, 3])       |
| 2773    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.weight_add                         | input_0             | torch.float32 |         | -5.9887576        | 4.3753686        | 0.0109376      | 0.6464748             | torch.Size([2, 512, 256])        |
| 2773    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.weight_add                         | input_1             | torch.float32 |         | -1.7223351        | 7.3336802        | 0.0607242      | 0.9015698             | torch.Size([2, 512, 256])        |
| 2773    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.weight_add                         | output              | torch.float32 |         | -6.2914047        | 7.7483292        | 0.0716618      | 1.4886400             | torch.Size([2, 512, 256])        |
| 2774    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 2774    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 2775    | torch.Tensor.reshape                                                              | head.layers.38                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 3, 4])         |
| 2775    | torch.Tensor.reshape                                                              | head.layers.38                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 2776    | torch.nn.modules.linear.Linear                                                    | head.layers.38.camera_encoder.0                   | input               | torch.float32 |         | -4.3784013        | 1.5829016        | -0.4411914     | 1.6379654             | torch.Size([2, 6, 12])           |
| 2776    | torch.nn.modules.linear.Linear                                                    | head.layers.38.camera_encoder.0                   | weight              | torch.float32 |         | -0.5837476        | 0.6199124        | 0.0053515      | 0.0138439             | torch.Size([256, 12])            |
| 2776    | torch.nn.modules.linear.Linear                                                    | head.layers.38.camera_encoder.0                   | bias                | torch.float32 |         | -0.3124255        | 0.3618607        | 0.0002249      | 0.0292400             | torch.Size([256])                |
| 2776    | torch.nn.modules.linear.Linear                                                    | head.layers.38.camera_encoder.0                   | output              | torch.float32 |         | -1.2553160        | 1.0536754        | -0.1143259     | 0.1711709             | torch.Size([2, 6, 256])          |
| 2777    | torch.nn.modules.activation.ReLU                                                  | head.layers.38.camera_encoder.1                   | input               | torch.float32 |         | 0.0000000         | 1.0536754        | 0.1234016      | 0.0409576             | torch.Size([2, 6, 256])          |
| 2777    | torch.nn.modules.activation.ReLU                                                  | head.layers.38.camera_encoder.1                   | output              | torch.float32 |         | 0.0000000         | 1.0536754        | 0.1234016      | 0.0409576             | torch.Size([2, 6, 256])          |
| 2778    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.2.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 1.0536754        | 0.1234016      | 0.0409576             | torch.Size([2, 6, 256])          |
| 2778    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.2.input_mean.mean   | output              | torch.float32 |         | 0.1089630         | 0.1362847        | 0.1234016      | 0.0000880             | torch.Size([2, 6, 1])            |
| 2779    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.2.sub               | input_0             | torch.float32 |         | 0.0000000         | 1.0536754        | 0.1234016      | 0.0409576             | torch.Size([2, 6, 256])          |
| 2779    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.2.sub               | input_1             | torch.float32 |         | 0.1089630         | 0.1362847        | 0.1234016      | 0.0000880             | torch.Size([2, 6, 1])            |
| 2779    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.2.sub               | output              | torch.float32 |         | -0.1362847        | 0.9240283        | -0.0000000     | 0.0408768             | torch.Size([2, 6, 256])          |
| 2780    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.mul               | input_0             | torch.float32 |         | -0.1362847        | 0.9240283        | -0.0000000     | 0.0408768             | torch.Size([2, 6, 256])          |
| 2780    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.mul               | input_1             | torch.float32 |         | -0.1362847        | 0.9240283        | -0.0000000     | 0.0408768             | torch.Size([2, 6, 256])          |
| 2780    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.mul               | output              | torch.float32 |         | 0.0000000         | 0.8538283        | 0.0408635      | 0.0070822             | torch.Size([2, 6, 256])          |
| 2781    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.2.var_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 0.8538283        | 0.0408635      | 0.0070822             | torch.Size([2, 6, 256])          |
| 2781    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.2.var_mean.mean     | output              | torch.float32 |         | 0.0283811         | 0.0500048        | 0.0408635      | 0.0000444             | torch.Size([2, 6, 1])            |
| 2782    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.38.camera_encoder.2.rsqrt             | input               | torch.float32 |         | 0.0283811         | 0.0500048        | 0.0408635      | 0.0000444             | torch.Size([2, 6, 1])            |
| 2782    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.38.camera_encoder.2.rsqrt             | output              | torch.float32 |         | 4.4714737         | 5.9348421        | 4.9979630      | 0.2100988             | torch.Size([2, 6, 1])            |
| 2783    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.out_mul           | input_0             | torch.float32 |         | -0.1362847        | 0.9240283        | -0.0000000     | 0.0408768             | torch.Size([2, 6, 256])          |
| 2783    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.out_mul           | input_1             | torch.float32 |         | 4.4714737         | 5.9348421        | 4.9979630      | 0.2100988             | torch.Size([2, 6, 1])            |
| 2783    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.out_mul           | output              | torch.float32 |         | -0.6466782        | 4.4512854        | -0.0000000     | 1.0000739             | torch.Size([2, 6, 256])          |
| 2784    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.camera_encoder.2.weight_quant      | input               | torch.float32 |         | 0.6364256         | 1.2354475        | 0.9619384      | 0.0091793             | torch.Size([256])                |
| 2784    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.camera_encoder.2.weight_quant      | output              | torch.float32 |         | 0.6364256         | 1.2354475        | 0.9619384      | 0.0091793             | torch.Size([256])                |
| 2785    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.weight_mul        | input_0             | torch.float32 |         | -0.6466782        | 4.4512854        | -0.0000000     | 1.0000739             | torch.Size([2, 6, 256])          |
| 2785    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.weight_mul        | input_1             | torch.float32 |         | 0.6364256         | 1.2354475        | 0.9619384      | 0.0091793             | torch.Size([256])                |
| 2785    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.2.weight_mul        | output              | torch.float32 |         | -0.7989370        | 4.8744335        | 0.0230267      | 1.0099785             | torch.Size([2, 6, 256])          |
| 2786    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.camera_encoder.2.bias_quant        | input               | torch.float32 |         | -0.0854455        | 0.2577538        | 0.0279319      | 0.0030540             | torch.Size([256])                |
| 2786    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.camera_encoder.2.bias_quant        | output              | torch.float32 |         | -0.0854455        | 0.2577538        | 0.0279319      | 0.0030540             | torch.Size([256])                |
| 2787    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.2.bias_add          | input_0             | torch.float32 |         | -0.7989370        | 4.8744335        | 0.0230267      | 1.0099785             | torch.Size([2, 6, 256])          |
| 2787    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.2.bias_add          | input_1             | torch.float32 |         | -0.0854455        | 0.2577538        | 0.0279319      | 0.0030540             | torch.Size([256])                |
| 2787    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.2.bias_add          | output              | torch.float32 |         | -0.8296137        | 4.8705287        | 0.0509585      | 0.9892611             | torch.Size([2, 6, 256])          |
| 2788    | torch.nn.modules.linear.Linear                                                    | head.layers.38.camera_encoder.3                   | input               | torch.float32 |         | -0.8296137        | 4.8705287        | 0.0509585      | 0.9892611             | torch.Size([2, 6, 256])          |
| 2788    | torch.nn.modules.linear.Linear                                                    | head.layers.38.camera_encoder.3                   | weight              | torch.float32 |         | -0.4502119        | 0.5281727        | 0.0017226      | 0.0051280             | torch.Size([256, 256])           |
| 2788    | torch.nn.modules.linear.Linear                                                    | head.layers.38.camera_encoder.3                   | bias                | torch.float32 |         | -0.0939403        | 0.2747428        | -0.0087818     | 0.0019428             | torch.Size([256])                |
| 2788    | torch.nn.modules.linear.Linear                                                    | head.layers.38.camera_encoder.3                   | output              | torch.float32 |         | -9.8019505        | 41.1847763       | -0.9721469     | 16.5397968            | torch.Size([2, 6, 256])          |
| 2789    | torch.nn.modules.activation.ReLU                                                  | head.layers.38.camera_encoder.4                   | input               | torch.float32 |         | 0.0000000         | 41.1847763       | 0.7857045      | 12.0341806            | torch.Size([2, 6, 256])          |
| 2789    | torch.nn.modules.activation.ReLU                                                  | head.layers.38.camera_encoder.4                   | output              | torch.float32 |         | 0.0000000         | 41.1847763       | 0.7857045      | 12.0341806            | torch.Size([2, 6, 256])          |
| 2790    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.5.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 41.1847763       | 0.7857045      | 12.0341806            | torch.Size([2, 6, 256])          |
| 2790    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.5.input_mean.mean   | output              | torch.float32 |         | 0.7128931         | 1.0289257        | 0.7857045      | 0.0134060             | torch.Size([2, 6, 1])            |
| 2791    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.5.sub               | input_0             | torch.float32 |         | 0.0000000         | 41.1847763       | 0.7857045      | 12.0341806            | torch.Size([2, 6, 256])          |
| 2791    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.5.sub               | input_1             | torch.float32 |         | 0.7128931         | 1.0289257        | 0.7857045      | 0.0134060             | torch.Size([2, 6, 1])            |
| 2791    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.38.camera_encoder.5.sub               | output              | torch.float32 |         | -1.0289257        | 40.4546089       | -0.0000000     | 12.0218868            | torch.Size([2, 6, 256])          |
| 2792    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.mul               | input_0             | torch.float32 |         | -1.0289257        | 40.4546089       | -0.0000000     | 12.0218868            | torch.Size([2, 6, 256])          |
| 2792    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.mul               | input_1             | torch.float32 |         | -1.0289257        | 40.4546089       | -0.0000000     | 12.0218868            | torch.Size([2, 6, 256])          |
| 2792    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.mul               | output              | torch.float32 |         | 0.0012388         | 1636.5754395     | 12.0179768     | 10223.7001953         | torch.Size([2, 6, 256])          |
| 2793    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.5.var_mean.mean     | input_0             | torch.float32 |         | 0.0012388         | 1636.5754395     | 12.0179768     | 10223.7001953         | torch.Size([2, 6, 256])          |
| 2793    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.38.camera_encoder.5.var_mean.mean     | output              | torch.float32 |         | 10.3371115        | 15.5415983       | 12.0179749     | 2.7748766             | torch.Size([2, 6, 1])            |
| 2794    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.38.camera_encoder.5.rsqrt             | input               | torch.float32 |         | 10.3371115        | 15.5415983       | 12.0179749     | 2.7748766             | torch.Size([2, 6, 1])            |
| 2794    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.38.camera_encoder.5.rsqrt             | output              | torch.float32 |         | 0.2536600         | 0.3110285        | 0.2901317      | 0.0003178             | torch.Size([2, 6, 1])            |
| 2795    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.out_mul           | input_0             | torch.float32 |         | -1.0289257        | 40.4546089       | -0.0000000     | 12.0218868            | torch.Size([2, 6, 256])          |
| 2795    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.out_mul           | input_1             | torch.float32 |         | 0.2536600         | 0.3110285        | 0.2901317      | 0.0003178             | torch.Size([2, 6, 1])            |
| 2795    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.out_mul           | output              | torch.float32 |         | -0.2627022        | 11.8028631       | 0.0000000      | 1.0003247             | torch.Size([2, 6, 256])          |
| 2796    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.camera_encoder.5.weight_quant      | input               | torch.float32 |         | 0.4334703         | 1.5143329        | 0.8827897      | 0.0300007             | torch.Size([256])                |
| 2796    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.camera_encoder.5.weight_quant      | output              | torch.float32 |         | 0.4334703         | 1.5143329        | 0.8827897      | 0.0300007             | torch.Size([256])                |
| 2797    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.weight_mul        | input_0             | torch.float32 |         | -0.2627022        | 11.8028631       | 0.0000000      | 1.0003247             | torch.Size([2, 6, 256])          |
| 2797    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.weight_mul        | input_1             | torch.float32 |         | 0.4334703         | 1.5143329        | 0.8827897      | 0.0300007             | torch.Size([256])                |
| 2797    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.camera_encoder.5.weight_mul        | output              | torch.float32 |         | -0.3978185        | 10.0599394       | -0.0292837     | 0.5638102             | torch.Size([2, 6, 256])          |
| 2798    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.camera_encoder.5.bias_quant        | input               | torch.float32 |         | -0.7513186        | 0.5755784        | 0.0355008      | 0.0327518             | torch.Size([256])                |
| 2798    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.camera_encoder.5.bias_quant        | output              | torch.float32 |         | -0.7513186        | 0.5755784        | 0.0355008      | 0.0327518             | torch.Size([256])                |
| 2799    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.5.bias_add          | input_0             | torch.float32 |         | -0.3978185        | 10.0599394       | -0.0292837     | 0.5638102             | torch.Size([2, 6, 256])          |
| 2799    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.5.bias_add          | input_1             | torch.float32 |         | -0.7513186        | 0.5755784        | 0.0355008      | 0.0327518             | torch.Size([256])                |
| 2799    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.camera_encoder.5.bias_add          | output              | torch.float32 |         | -1.1324173        | 9.9904346        | 0.0062171      | 0.5503878             | torch.Size([2, 6, 256])          |
| 2800    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | input_0             | torch.float32 |         | -6.2914047        | 7.7483292        | 0.0716618      | 1.4886400             | torch.Size([2, 512, 256])        |
| 2800    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | output              | torch.float32 |         | -6.2914047        | 7.7483292        | 0.0716618      | 1.4886400             | torch.Size([2, 512, 1, 256])     |
| 2801    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | input_0             | torch.float32 |         | -1.1324173        | 9.9904346        | 0.0062171      | 0.5503878             | torch.Size([2, 6, 256])          |
| 2801    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | output              | torch.float32 |         | -1.1324173        | 9.9904346        | 0.0062171      | 0.5503878             | torch.Size([2, 1, 6, 256])       |
| 2802    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.cam_add                            | input_0             | torch.float32 |         | -6.2914047        | 7.7483292        | 0.0716618      | 1.4886400             | torch.Size([2, 512, 1, 256])     |
| 2802    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.cam_add                            | input_1             | torch.float32 |         | -1.1324173        | 9.9904346        | 0.0062171      | 0.5503878             | torch.Size([2, 1, 6, 256])       |
| 2802    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.38.cam_add                            | output              | torch.float32 |         | -5.1630440        | 11.0872154       | 0.0778789      | 1.5992510             | torch.Size([2, 512, 6, 256])     |
| 2803    | torch.nn.modules.linear.Linear                                                    | head.layers.38.weights_fc                         | input               | torch.float32 |         | -5.1630440        | 11.0872154       | 0.0778789      | 1.5992510             | torch.Size([2, 512, 6, 256])     |
| 2803    | torch.nn.modules.linear.Linear                                                    | head.layers.38.weights_fc                         | weight              | torch.float32 |         | -0.3664656        | 0.5587184        | 0.0007138      | 0.0033969             | torch.Size([64, 256])            |
| 2803    | torch.nn.modules.linear.Linear                                                    | head.layers.38.weights_fc                         | bias                | torch.float32 |         | -0.1132682        | 0.0694408        | -0.0024798     | 0.0018388             | torch.Size([64])                 |
| 2803    | torch.nn.modules.linear.Linear                                                    | head.layers.38.weights_fc                         | output              | torch.float32 |         | -11.1800766       | 8.5147581        | -0.0817066     | 6.0020499             | torch.Size([2, 512, 6, 64])      |
| 2804    | torch.Tensor.reshape                                                              | head.layers.38                                    | input_0             | torch.float32 |         | -11.1800766       | 8.5147581        | -0.0817066     | 6.0020499             | torch.Size([2, 512, 6, 64])      |
| 2804    | torch.Tensor.reshape                                                              | head.layers.38                                    | output              | torch.float32 |         | -11.1800766       | 8.5147581        | -0.0817066     | 6.0020499             | torch.Size([2, 512, 48, 8])      |
| 2805    | torch.Tensor.max                                                                  | head.layers.38.weight_softmax                     | input               | torch.float32 |         | -11.1800766       | 8.5147581        | -0.0817066     | 6.0020499             | torch.Size([2, 512, 48, 8])      |
| 2805    | torch.Tensor.max                                                                  | head.layers.38.weight_softmax                     | output_0            | torch.float32 |         | 1.7488453         | 8.5147581        | 3.7357345      | 1.4185029             | torch.Size([2, 512, 1, 8])       |
| 2805    | torch.Tensor.max                                                                  | head.layers.38.weight_softmax                     | output_1            | torch.int64   |         | 1.0000000         | 47.0000000       | 27.4937744     | 242.4709778           | torch.Size([2, 512, 1, 8])       |
| 2806    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.38.weight_softmax.sub                 | input_0             | torch.float32 |         | -11.1800766       | 8.5147581        | -0.0817066     | 6.0020499             | torch.Size([2, 512, 48, 8])      |
| 2806    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.38.weight_softmax.sub                 | input_1             | torch.float32 |         | 1.7488453         | 8.5147581        | 3.7357345      | 1.4185029             | torch.Size([2, 512, 1, 8])       |
| 2806    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.38.weight_softmax.sub                 | output              | torch.float32 |         | -14.8350029       | 0.0000000        | -3.8174415     | 6.7703271             | torch.Size([2, 512, 48, 8])      |
| 2807    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.38.weight_softmax.exp                 | input               | torch.float32 |         | -14.8350029       | 0.0000000        | -3.8174415     | 6.7703271             | torch.Size([2, 512, 48, 8])      |
| 2807    | horizon_plugin_pytorch.nn.segment_lut.SegmentLUT                                  | head.layers.38.weight_softmax.exp                 | output              | torch.float32 |         | 0.0000004         | 1.0000000        | 0.1565686      | 0.0625222             | torch.Size([2, 512, 48, 8])      |
| 2808    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.38.weight_softmax.sum                 | input               | torch.float32 |         | 0.0000004         | 1.0000000        | 0.1565686      | 0.0625222             | torch.Size([2, 512, 48, 8])      |
| 2808    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.38.weight_softmax.sum                 | output              | torch.float32 |         | 1.3792665         | 18.0474186       | 7.5152912      | 11.6758413            | torch.Size([2, 512, 1, 8])       |
| 2809    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.38.weight_softmax.reciprocal          | input               | torch.float32 |         | 1.3792665         | 18.0474186       | 7.5152912      | 11.6758413            | torch.Size([2, 512, 1, 8])       |
| 2809    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.38.weight_softmax.reciprocal          | output              | torch.float32 |         | 0.0554096         | 0.7250230        | 0.1758064      | 0.0124903             | torch.Size([2, 512, 1, 8])       |
| 2810    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.weight_softmax.mul                 | input_0             | torch.float32 |         | 0.0000004         | 1.0000000        | 0.1565686      | 0.0625222             | torch.Size([2, 512, 48, 8])      |
| 2810    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.weight_softmax.mul                 | input_1             | torch.float32 |         | 0.0554096         | 0.7250230        | 0.1758064      | 0.0124903             | torch.Size([2, 512, 1, 8])       |
| 2810    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.weight_softmax.mul                 | output              | torch.float32 |         | 0.0000001         | 0.7250230        | 0.0208333      | 0.0015656             | torch.Size([2, 512, 48, 8])      |
| 2811    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | input_0             | torch.float32 |         | -60.0323524       | 58.7404404       | 0.6813018      | 293.5900879           | torch.Size([2, 512, 8, 3])       |
| 2811    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | output              | torch.float32 |         | -46.2162094       | 52.1604652       | 1.3963438      | 311.7921143           | torch.Size([2, 512, 8, 1])       |
| 2812    | torch.ones_like                                                                   | head.layers.38                                    | input               | torch.float32 |         | -46.2162094       | 52.1604652       | 1.3963438      | 311.7921143           | torch.Size([2, 512, 8, 1])       |
| 2812    | torch.ones_like                                                                   | head.layers.38                                    | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 2813    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.point_quant_stub                   | input               | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 2813    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.38.point_quant_stub                   | output              | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 2814    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.38.point_cat                          | input_0             | torch.float32 |         | -60.0323524       | 58.7404404       | 0.6813018      | 293.5900879           | torch.Size([2, 512, 8, 3])       |
| 2814    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.38.point_cat                          | input_1             | torch.float32 |         | 1.0000000         | 1.0000000        | 1.0000000      | 0.0000000             | torch.Size([2, 512, 8, 1])       |
| 2814    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.38.point_cat                          | output              | torch.float32 |         | -60.0323524       | 58.7404404       | 0.7609763      | 220.2093811           | torch.Size([2, 512, 8, 4])       |
| 2815    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 4, 4])         |
| 2815    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | output              | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 2816    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | input_0             | torch.float32 |         | -60.0323524       | 58.7404404       | 0.7609763      | 220.2093811           | torch.Size([2, 512, 8, 4])       |
| 2816    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | output              | torch.float32 |         | -60.0323524       | 58.7404404       | 0.7609763      | 220.2093811           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 2817    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.point_matmul                       | input_0             | torch.float32 |         | -4.3784013        | 1.5829016        | -0.2683935     | 1.3634968             | torch.Size([2, 6, 1, 1, 4, 4])   |
| 2817    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.point_matmul                       | input_1             | torch.float32 |         | -60.0323524       | 58.7404404       | 0.7609763      | 220.2093811           | torch.Size([2, 1, 512, 8, 1, 4]) |
| 2817    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.point_matmul                       | output              | torch.float32 |         | -89.5975342       | 88.4200211       | 0.2691689      | 100.5775986           | torch.Size([2, 6, 512, 8, 4, 4]) |
| 2818    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.38.point_sum                          | input               | torch.float32 |         | -89.5975342       | 88.4200211       | 0.2691689      | 100.5775986           | torch.Size([2, 6, 512, 8, 4, 4]) |
| 2818    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.38.point_sum                          | output              | torch.float32 |         | -93.1130600       | 93.2112579       | 1.0766757      | 395.1979065           | torch.Size([2, 6, 512, 8, 4])    |
| 2819    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | input_0             | torch.float32 |         | -93.1130600       | 93.2112579       | 1.0766757      | 395.1979065           | torch.Size([2, 6, 512, 8, 4])    |
| 2819    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | output              | torch.float32 |         | -60.8665009       | 59.1294556       | -0.5565716     | 434.2742310           | torch.Size([2, 6, 512, 8, 1])    |
| 2820    | torch.clamp                                                                       | head.layers.38                                    | input               | torch.float32 |         | -60.8665009       | 59.1294556       | -0.5565716     | 434.2742310           | torch.Size([2, 6, 512, 8, 1])    |
| 2820    | torch.clamp                                                                       | head.layers.38                                    | output              | torch.float32 |         | 0.0000100         | 59.1294556       | 7.5372176      | 153.1957703           | torch.Size([2, 6, 512, 8, 1])    |
| 2821    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.38.reciprocal_op                      | input               | torch.float32 |         | 0.0000100         | 59.1294556       | 7.5372176      | 153.1957703           | torch.Size([2, 6, 512, 8, 1])    |
| 2821    | horizon_plugin_pytorch.nn.reciprocal.Reciprocal                                   | head.layers.38.reciprocal_op                      | output              | torch.float32 |         | 0.0169120         | 100000.0000000   | 51817.1679688  | 2496713216.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 2822    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | input_0             | torch.float32 |         | -93.1130600       | 93.2112579       | 1.0766757      | 395.1979065           | torch.Size([2, 6, 512, 8, 4])    |
| 2822    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | output              | torch.float32 |         | -93.1130600       | 93.2112579       | 1.9316368      | 571.1972046           | torch.Size([2, 6, 512, 8, 2])    |
| 2823    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.point_mul                          | input_0             | torch.float32 |         | -93.1130600       | 93.2112579       | 1.9316368      | 571.1972046           | torch.Size([2, 6, 512, 8, 2])    |
| 2823    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.point_mul                          | input_1             | torch.float32 |         | 0.0169120         | 100000.0000000   | 51817.1679688  | 2496713216.0000000    | torch.Size([2, 6, 512, 8, 1])    |
| 2823    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.point_mul                          | output              | torch.float32 |         | -9262455.0000000  | 9321126.0000000  | 244471.6250000 | 2912428752896.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 2824    | torch.Tensor.flatten                                                              | head.layers.38                                    | input               | torch.float32 |         | -9262455.0000000  | 9321126.0000000  | 244471.6250000 | 2912428752896.0000000 | torch.Size([2, 6, 512, 8, 2])    |
| 2824    | torch.Tensor.flatten                                                              | head.layers.38                                    | output              | torch.float32 |         | -9262455.0000000  | 9321126.0000000  | 244471.6250000 | 2912428752896.0000000 | torch.Size([12, 512, 8, 2])      |
| 2825    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.38                                    | input_0             | torch.float32 |         | -44.8620338       | 31.9191360       | 0.1436918      | 20.2713203            | torch.Size([12, 256, 16, 44])    |
| 2825    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.38                                    | input_1             | torch.float32 |         | -9262455.0000000  | 9321126.0000000  | 244471.6250000 | 2912428752896.0000000 | torch.Size([12, 512, 8, 2])      |
| 2825    | horizon_plugin_pytorch.nn.grid_sample.autocasted_grid_sample                      | head.layers.38                                    | output              | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([12, 256, 512, 8])    |
| 2826    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.38.feat_cat                           | input               | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([12, 256, 512, 8])    |
| 2826    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.38.feat_cat                           | output              | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([12, 256, 512, 8])    |
| 2827    | torch.Tensor.view                                                                 | head.layers.38                                    | input_0             | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([12, 256, 512, 8])    |
| 2827    | torch.Tensor.view                                                                 | head.layers.38                                    | output              | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 6, 256, 512, 8])  |
| 2828    | torch.Tensor.permute                                                              | head.layers.38                                    | input_0             | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 6, 256, 512, 8])  |
| 2828    | torch.Tensor.permute                                                              | head.layers.38                                    | output              | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 512, 6, 8, 256])  |
| 2829    | torch.Tensor.contiguous                                                           | head.layers.38                                    | input               | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 512, 6, 8, 256])  |
| 2829    | torch.Tensor.contiguous                                                           | head.layers.38                                    | output              | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 512, 6, 8, 256])  |
| 2830    | torch.Tensor.view                                                                 | head.layers.38                                    | input_0             | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 512, 6, 8, 256])  |
| 2830    | torch.Tensor.view                                                                 | head.layers.38                                    | output              | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 512, 48, 256])    |
| 2831    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | input_0             | torch.float32 |         | 0.0000001         | 0.7250230        | 0.0208333      | 0.0015656             | torch.Size([2, 512, 48, 8])      |
| 2831    | torch.Tensor.__getitem__                                                          | head.layers.38                                    | output              | torch.float32 |         | 0.0000001         | 0.7250230        | 0.0208333      | 0.0015656             | torch.Size([2, 512, 48, 8, 1])   |
| 2832    | torch.Tensor.reshape                                                              | head.layers.38                                    | input_0             | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 512, 48, 256])    |
| 2832    | torch.Tensor.reshape                                                              | head.layers.38                                    | output              | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 512, 48, 8, 32])  |
| 2833    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.feat_mul                           | input_0             | torch.float32 |         | 0.0000001         | 0.7250230        | 0.0208333      | 0.0015656             | torch.Size([2, 512, 48, 8, 1])   |
| 2833    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.feat_mul                           | input_1             | torch.float32 |         | -44.1140976       | 31.2801590       | 0.0257414      | 2.9703665             | torch.Size([2, 512, 48, 8, 32])  |
| 2833    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.38.feat_mul                           | output              | torch.float32 |         | -2.9716475        | 2.7749150        | 0.0003407      | 0.0023578             | torch.Size([2, 512, 48, 8, 32])  |
| 2834    | torch.Tensor.view                                                                 | head.layers.38                                    | input_0             | torch.float32 |         | -2.9716475        | 2.7749150        | 0.0003407      | 0.0023578             | torch.Size([2, 512, 48, 8, 32])  |
| 2834    | torch.Tensor.view                                                                 | head.layers.38                                    | output              | torch.float32 |         | -2.9716475        | 2.7749150        | 0.0003407      | 0.0023578             | torch.Size([2, 512, 48, 256])    |
| 2835    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.38.feat_sum                           | input               | torch.float32 |         | -2.9716475        | 2.7749150        | 0.0003407      | 0.0023578             | torch.Size([2, 512, 48, 256])    |
| 2835    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sum        | head.layers.38.feat_sum                           | output              | torch.float32 |         | -5.6449471        | 4.6186924        | 0.0163533      | 0.2793331             | torch.Size([2, 512, 256])        |
| 2836    | torch.nn.modules.linear.Linear                                                    | head.layers.38.output_proj                        | input               | torch.float32 |         | -5.6449471        | 4.6186924        | 0.0163533      | 0.2793331             | torch.Size([2, 512, 256])        |
| 2836    | torch.nn.modules.linear.Linear                                                    | head.layers.38.output_proj                        | weight              | torch.float32 |         | -0.3224856        | 0.3687426        | 0.0000557      | 0.0083070             | torch.Size([256, 256])           |
| 2836    | torch.nn.modules.linear.Linear                                                    | head.layers.38.output_proj                        | bias                | torch.float32 |         | -0.0892059        | 0.1071169        | 0.0013445      | 0.0012537             | torch.Size([256])                |
| 2836    | torch.nn.modules.linear.Linear                                                    | head.layers.38.output_proj                        | output              | torch.float32 |         | -6.3306890        | 10.0671234       | 0.0523732      | 0.7328323             | torch.Size([2, 512, 256])        |
| 2837    | torch.nn.modules.dropout.Dropout                                                  | head.layers.38.proj_drop                          | input               | torch.float32 |         | -6.3306890        | 10.0671234       | 0.0523732      | 0.7328323             | torch.Size([2, 512, 256])        |
| 2837    | torch.nn.modules.dropout.Dropout                                                  | head.layers.38.proj_drop                          | output              | torch.float32 |         | -6.3306890        | 10.0671234       | 0.0523732      | 0.7328323             | torch.Size([2, 512, 256])        |
| 2838    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.38.residual_op                        | input_0             | torch.float32 |         | -6.3306890        | 10.0671234       | 0.0523732      | 0.7328323             | torch.Size([2, 512, 256])        |
| 2838    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.38.residual_op                        | input_1             | torch.float32 |         | -5.9887576        | 4.3753686        | 0.0109376      | 0.6464748             | torch.Size([2, 512, 256])        |
| 2838    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.cat        | head.layers.38.residual_op                        | output              | torch.float32 |         | -6.3306890        | 10.0671234       | 0.0316554      | 0.6900815             | torch.Size([2, 512, 512])        |
| 2839    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.39.pre_norm.input_mean.mean           | input_0             | torch.float32 |         | -6.3306890        | 10.0671234       | 0.0316554      | 0.6900815             | torch.Size([2, 512, 512])        |
| 2839    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.39.pre_norm.input_mean.mean           | output              | torch.float32 |         | -0.0140809        | 0.1437494        | 0.0316554      | 0.0004316             | torch.Size([2, 512, 1])          |
| 2840    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.39.pre_norm.sub                       | input_0             | torch.float32 |         | -6.3306890        | 10.0671234       | 0.0316554      | 0.6900815             | torch.Size([2, 512, 512])        |
| 2840    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.39.pre_norm.sub                       | input_1             | torch.float32 |         | -0.0140809        | 0.1437494        | 0.0316554      | 0.0004316             | torch.Size([2, 512, 1])          |
| 2840    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.39.pre_norm.sub                       | output              | torch.float32 |         | -6.4506383        | 9.9233742        | -0.0000000     | 0.6896504             | torch.Size([2, 512, 512])        |
| 2841    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.mul                       | input_0             | torch.float32 |         | -6.4506383        | 9.9233742        | -0.0000000     | 0.6896504             | torch.Size([2, 512, 512])        |
| 2841    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.mul                       | input_1             | torch.float32 |         | -6.4506383        | 9.9233742        | -0.0000000     | 0.6896504             | torch.Size([2, 512, 512])        |
| 2841    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.mul                       | output              | torch.float32 |         | 0.0000000         | 98.4733582       | 0.6896490      | 4.9016457             | torch.Size([2, 512, 512])        |
| 2842    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.39.pre_norm.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 98.4733582       | 0.6896490      | 4.9016457             | torch.Size([2, 512, 512])        |
| 2842    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.39.pre_norm.var_mean.mean             | output              | torch.float32 |         | 0.3363842         | 3.0608270        | 0.6896490      | 0.0869332             | torch.Size([2, 512, 1])          |
| 2843    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.39.pre_norm.rsqrt                     | input               | torch.float32 |         | 0.3363842         | 3.0608270        | 0.6896490      | 0.0869332             | torch.Size([2, 512, 1])          |
| 2843    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.39.pre_norm.rsqrt                     | output              | torch.float32 |         | 0.5715838         | 1.7241526        | 1.2720077      | 0.0558800             | torch.Size([2, 512, 1])          |
| 2844    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.out_mul                   | input_0             | torch.float32 |         | -6.4506383        | 9.9233742        | -0.0000000     | 0.6896504             | torch.Size([2, 512, 512])        |
| 2844    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.out_mul                   | input_1             | torch.float32 |         | 0.5715838         | 1.7241526        | 1.2720077      | 0.0558800             | torch.Size([2, 512, 1])          |
| 2844    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.out_mul                   | output              | torch.float32 |         | -9.0117321        | 6.9419546        | -0.0000000     | 0.9999852             | torch.Size([2, 512, 512])        |
| 2845    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.39.pre_norm.weight_quant              | input               | torch.float32 |         | 0.6608862         | 1.4900941        | 0.9789718      | 0.0452766             | torch.Size([512])                |
| 2845    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.39.pre_norm.weight_quant              | output              | torch.float32 |         | 0.6608862         | 1.4900941        | 0.9789718      | 0.0452766             | torch.Size([512])                |
| 2846    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.weight_mul                | input_0             | torch.float32 |         | -9.0117321        | 6.9419546        | -0.0000000     | 0.9999852             | torch.Size([2, 512, 512])        |
| 2846    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.weight_mul                | input_1             | torch.float32 |         | 0.6608862         | 1.4900941        | 0.9789718      | 0.0452766             | torch.Size([512])                |
| 2846    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.39.pre_norm.weight_mul                | output              | torch.float32 |         | -6.3561697        | 5.7942061        | 0.0025306      | 0.7595325             | torch.Size([2, 512, 512])        |
| 2847    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.39.pre_norm.bias_quant                | input               | torch.float32 |         | -0.1679264        | 0.1870694        | 0.0026524      | 0.0032695             | torch.Size([512])                |
| 2847    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.39.pre_norm.bias_quant                | output              | torch.float32 |         | -0.1679264        | 0.1870694        | 0.0026524      | 0.0032695             | torch.Size([512])                |
| 2848    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.39.pre_norm.bias_add                  | input_0             | torch.float32 |         | -6.3561697        | 5.7942061        | 0.0025306      | 0.7595325             | torch.Size([2, 512, 512])        |
| 2848    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.39.pre_norm.bias_add                  | input_1             | torch.float32 |         | -0.1679264        | 0.1870694        | 0.0026524      | 0.0032695             | torch.Size([512])                |
| 2848    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.39.pre_norm.bias_add                  | output              | torch.float32 |         | -6.2420440        | 5.8512201        | 0.0051830      | 0.7623965             | torch.Size([2, 512, 512])        |
| 2849    | torch.nn.modules.linear.Linear                                                    | head.layers.39.layers.0.0                         | input               | torch.float32 |         | -6.2420440        | 5.8512201        | 0.0051830      | 0.7623965             | torch.Size([2, 512, 512])        |
| 2849    | torch.nn.modules.linear.Linear                                                    | head.layers.39.layers.0.0                         | weight              | torch.float32 |         | -0.5392269        | 0.4812456        | -0.0005245     | 0.0077121             | torch.Size([1024, 512])          |
| 2849    | torch.nn.modules.linear.Linear                                                    | head.layers.39.layers.0.0                         | bias                | torch.float32 |         | -0.1937473        | 0.0078548        | -0.0795463     | 0.0012755             | torch.Size([1024])               |
| 2849    | torch.nn.modules.linear.Linear                                                    | head.layers.39.layers.0.0                         | output              | torch.float32 |         | -19.0362625       | 13.6141090       | -3.8522954     | 9.6368093             | torch.Size([2, 512, 1024])       |
| 2850    | torch.nn.modules.activation.ReLU                                                  | head.layers.39.activate                           | input               | torch.float32 |         | 0.0000000         | 13.6141090       | 0.1740784      | 0.4826890             | torch.Size([2, 512, 1024])       |
| 2850    | torch.nn.modules.activation.ReLU                                                  | head.layers.39.activate                           | output              | torch.float32 |         | 0.0000000         | 13.6141090       | 0.1740784      | 0.4826890             | torch.Size([2, 512, 1024])       |
| 2851    | torch.nn.modules.dropout.Dropout                                                  | head.layers.39.layers.0.2                         | input               | torch.float32 |         | 0.0000000         | 13.6141090       | 0.1740784      | 0.4826890             | torch.Size([2, 512, 1024])       |
| 2851    | torch.nn.modules.dropout.Dropout                                                  | head.layers.39.layers.0.2                         | output              | torch.float32 |         | 0.0000000         | 13.6141090       | 0.1740784      | 0.4826890             | torch.Size([2, 512, 1024])       |
| 2852    | torch.nn.modules.linear.Linear                                                    | head.layers.39.layers.1                           | input               | torch.float32 |         | 0.0000000         | 13.6141090       | 0.1740784      | 0.4826890             | torch.Size([2, 512, 1024])       |
| 2852    | torch.nn.modules.linear.Linear                                                    | head.layers.39.layers.1                           | weight              | torch.float32 |         | -0.5038874        | 0.5895149        | 0.0001352      | 0.0091717             | torch.Size([256, 1024])          |
| 2852    | torch.nn.modules.linear.Linear                                                    | head.layers.39.layers.1                           | bias                | torch.float32 |         | -0.0698264        | 0.0842768        | -0.0005476     | 0.0007709             | torch.Size([256])                |
| 2852    | torch.nn.modules.linear.Linear                                                    | head.layers.39.layers.1                           | output              | torch.float32 |         | -25.1973553       | 22.9374504       | -0.0515369     | 21.0605583            | torch.Size([2, 512, 256])        |
| 2853    | torch.nn.modules.dropout.Dropout                                                  | head.layers.39.layers.2                           | input               | torch.float32 |         | -25.1973553       | 22.9374504       | -0.0515369     | 21.0605583            | torch.Size([2, 512, 256])        |
| 2853    | torch.nn.modules.dropout.Dropout                                                  | head.layers.39.layers.2                           | output              | torch.float32 |         | -25.1973553       | 22.9374504       | -0.0515369     | 21.0605583            | torch.Size([2, 512, 256])        |
| 2854    | torch.nn.modules.linear.Linear                                                    | head.layers.39.identity_fc                        | input               | torch.float32 |         | -6.2420440        | 5.8512201        | 0.0051830      | 0.7623965             | torch.Size([2, 512, 512])        |
| 2854    | torch.nn.modules.linear.Linear                                                    | head.layers.39.identity_fc                        | weight              | torch.float32 |         | -0.4967276        | 0.4735355        | -0.0000963     | 0.0086209             | torch.Size([256, 512])           |
| 2854    | torch.nn.modules.linear.Linear                                                    | head.layers.39.identity_fc                        | bias                | torch.float32 |         | -0.1381557        | 0.0822432        | -0.0011134     | 0.0011628             | torch.Size([256])                |
| 2854    | torch.nn.modules.linear.Linear                                                    | head.layers.39.identity_fc                        | output              | torch.float32 |         | -17.8483639       | 25.3368950       | -0.0015241     | 16.2353344            | torch.Size([2, 512, 256])        |
| 2855    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.39.short_add                          | input_0             | torch.float32 |         | -17.8483639       | 25.3368950       | -0.0015241     | 16.2353344            | torch.Size([2, 512, 256])        |
| 2855    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.39.short_add                          | input_1             | torch.float32 |         | -25.1973553       | 22.9374504       | -0.0515369     | 21.0605583            | torch.Size([2, 512, 256])        |
| 2855    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.39.short_add                          | output              | torch.float32 |         | -33.2234840       | 31.3515987       | -0.0530611     | 52.9195061            | torch.Size([2, 512, 256])        |
| 2856    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.40.input_mean.mean                    | input_0             | torch.float32 |         | -33.2234840       | 31.3515987       | -0.0530611     | 52.9195061            | torch.Size([2, 512, 256])        |
| 2856    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.40.input_mean.mean                    | output              | torch.float32 |         | -0.2993124        | 0.1671602        | -0.0530611     | 0.0075973             | torch.Size([2, 512, 1])          |
| 2857    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.40.sub                                | input_0             | torch.float32 |         | -33.2234840       | 31.3515987       | -0.0530611     | 52.9195061            | torch.Size([2, 512, 256])        |
| 2857    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.40.sub                                | input_1             | torch.float32 |         | -0.2993124        | 0.1671602        | -0.0530611     | 0.0075973             | torch.Size([2, 512, 1])          |
| 2857    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.40.sub                                | output              | torch.float32 |         | -33.0735397       | 31.5015430       | 0.0000000      | 52.9119148            | torch.Size([2, 512, 256])        |
| 2858    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.40.mul                                | input_0             | torch.float32 |         | -33.0735397       | 31.5015430       | 0.0000000      | 52.9119148            | torch.Size([2, 512, 256])        |
| 2858    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.40.mul                                | input_1             | torch.float32 |         | -33.0735397       | 31.5015430       | 0.0000000      | 52.9119148            | torch.Size([2, 512, 256])        |
| 2858    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.40.mul                                | output              | torch.float32 |         | 0.0000000         | 1093.8590088     | 52.9117126     | 11278.0859375         | torch.Size([2, 512, 256])        |
| 2859    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.40.var_mean.mean                      | input_0             | torch.float32 |         | 0.0000000         | 1093.8590088     | 52.9117126     | 11278.0859375         | torch.Size([2, 512, 256])        |
| 2859    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.40.var_mean.mean                      | output              | torch.float32 |         | 10.2435026        | 161.4502869      | 52.9117126     | 2463.3830566          | torch.Size([2, 512, 1])          |
| 2860    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.40.rsqrt                              | input               | torch.float32 |         | 10.2435026        | 161.4502869      | 52.9117126     | 2463.3830566          | torch.Size([2, 512, 1])          |
| 2860    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.40.rsqrt                              | output              | torch.float32 |         | 0.0787011         | 0.3124464        | 0.1796302      | 0.0041905             | torch.Size([2, 512, 1])          |
| 2861    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.40.out_mul                            | input_0             | torch.float32 |         | -33.0735397       | 31.5015430       | 0.0000000      | 52.9119148            | torch.Size([2, 512, 256])        |
| 2861    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.40.out_mul                            | input_1             | torch.float32 |         | 0.0787011         | 0.3124464        | 0.1796302      | 0.0041905             | torch.Size([2, 512, 1])          |
| 2861    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.40.out_mul                            | output              | torch.float32 |         | -4.6973209        | 5.6719947        | 0.0000000      | 1.0000035             | torch.Size([2, 512, 256])        |
| 2862    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.40.weight_quant                       | input               | torch.float32 |         | 0.3611936         | 1.1129279        | 0.8462322      | 0.0141788             | torch.Size([256])                |
| 2862    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.40.weight_quant                       | output              | torch.float32 |         | 0.3611936         | 1.1129279        | 0.8462322      | 0.0141788             | torch.Size([256])                |
| 2863    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.40.weight_mul                         | input_0             | torch.float32 |         | -4.6973209        | 5.6719947        | 0.0000000      | 1.0000035             | torch.Size([2, 512, 256])        |
| 2863    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.40.weight_mul                         | input_1             | torch.float32 |         | 0.3611936         | 1.1129279        | 0.8462322      | 0.0141788             | torch.Size([256])                |
| 2863    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.40.weight_mul                         | output              | torch.float32 |         | -4.2927213        | 4.7019234        | 0.0016851      | 0.7367631             | torch.Size([2, 512, 256])        |
| 2864    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.40.bias_quant                         | input               | torch.float32 |         | -0.1068868        | 0.1063567        | 0.0003906      | 0.0013848             | torch.Size([256])                |
| 2864    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.40.bias_quant                         | output              | torch.float32 |         | -0.1068868        | 0.1063567        | 0.0003906      | 0.0013848             | torch.Size([256])                |
| 2865    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.40.bias_add                           | input_0             | torch.float32 |         | -4.2927213        | 4.7019234        | 0.0016851      | 0.7367631             | torch.Size([2, 512, 256])        |
| 2865    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.40.bias_add                           | input_1             | torch.float32 |         | -0.1068868        | 0.1063567        | 0.0003906      | 0.0013848             | torch.Size([256])                |
| 2865    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.40.bias_add                           | output              | torch.float32 |         | -4.2215729        | 4.6305103        | 0.0020757      | 0.7369147             | torch.Size([2, 512, 256])        |
| 2866    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.add1                               | input_0             | torch.float32 |         | -4.2215729        | 4.6305103        | 0.0020757      | 0.7369147             | torch.Size([2, 512, 256])        |
| 2866    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.add1                               | input_1             | torch.float32 |         | -1.7223351        | 7.3336802        | 0.0607242      | 0.9015698             | torch.Size([2, 512, 256])        |
| 2866    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.add1                               | output              | torch.float32 |         | -4.7876415        | 8.8668385        | 0.0627999      | 1.4825734             | torch.Size([2, 512, 256])        |
| 2867    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.0                           | input               | torch.float32 |         | -4.7876415        | 8.8668385        | 0.0627999      | 1.4825734             | torch.Size([2, 512, 256])        |
| 2867    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.0                           | weight              | torch.float32 |         | -0.9671087        | 1.0510615        | 0.0000745      | 0.0080127             | torch.Size([256, 256])           |
| 2867    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.0                           | bias                | torch.float32 |         | -0.2240563        | 0.0783759        | -0.0502922     | 0.0026307             | torch.Size([256])                |
| 2867    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.0                           | output              | torch.float32 |         | -13.5345583       | 9.8166761        | -1.4694917     | 5.0613594             | torch.Size([2, 512, 256])        |
| 2868    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.layers.1                           | input               | torch.float32 |         | 0.0000000         | 9.8166761        | 0.3138902      | 0.5680947             | torch.Size([2, 512, 256])        |
| 2868    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.layers.1                           | output              | torch.float32 |         | 0.0000000         | 9.8166761        | 0.3138902      | 0.5680947             | torch.Size([2, 512, 256])        |
| 2869    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.2                           | input               | torch.float32 |         | 0.0000000         | 9.8166761        | 0.3138902      | 0.5680947             | torch.Size([2, 512, 256])        |
| 2869    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.2                           | weight              | torch.float32 |         | -0.7024922        | 0.4782098        | -0.0114104     | 0.0081320             | torch.Size([256, 256])           |
| 2869    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.2                           | bias                | torch.float32 |         | -0.1883502        | 0.2478070        | -0.0179733     | 0.0065595             | torch.Size([256])                |
| 2869    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.2                           | output              | torch.float32 |         | -14.3049898       | 8.0055408        | -0.9063557     | 2.7112732             | torch.Size([2, 512, 256])        |
| 2870    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.layers.3                           | input               | torch.float32 |         | 0.0000000         | 8.0055408        | 0.2648471      | 0.3220307             | torch.Size([2, 512, 256])        |
| 2870    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.layers.3                           | output              | torch.float32 |         | 0.0000000         | 8.0055408        | 0.2648471      | 0.3220307             | torch.Size([2, 512, 256])        |
| 2871    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.layers.4.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 8.0055408        | 0.2648471      | 0.3220307             | torch.Size([2, 512, 256])        |
| 2871    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.layers.4.input_mean.mean           | output              | torch.float32 |         | 0.1138756         | 0.5534642        | 0.2648471      | 0.0037193             | torch.Size([2, 512, 1])          |
| 2872    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.layers.4.sub                       | input_0             | torch.float32 |         | 0.0000000         | 8.0055408        | 0.2648471      | 0.3220307             | torch.Size([2, 512, 256])        |
| 2872    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.layers.4.sub                       | input_1             | torch.float32 |         | 0.1138756         | 0.5534642        | 0.2648471      | 0.0037193             | torch.Size([2, 512, 1])          |
| 2872    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.layers.4.sub                       | output              | torch.float32 |         | -0.5534642        | 7.4520764        | -0.0000000     | 0.3183150             | torch.Size([2, 512, 256])        |
| 2873    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.mul                       | input_0             | torch.float32 |         | -0.5534642        | 7.4520764        | -0.0000000     | 0.3183150             | torch.Size([2, 512, 256])        |
| 2873    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.mul                       | input_1             | torch.float32 |         | -0.5534642        | 7.4520764        | -0.0000000     | 0.3183150             | torch.Size([2, 512, 256])        |
| 2873    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.mul                       | output              | torch.float32 |         | 0.0000000         | 55.5334435       | 0.3183138      | 1.1256042             | torch.Size([2, 512, 256])        |
| 2874    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.layers.4.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 55.5334435       | 0.3183138      | 1.1256042             | torch.Size([2, 512, 256])        |
| 2874    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.layers.4.var_mean.mean             | output              | torch.float32 |         | 0.0755139         | 1.6603931        | 0.3183138      | 0.0237166             | torch.Size([2, 512, 1])          |
| 2875    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.layers.4.rsqrt                     | input               | torch.float32 |         | 0.0755139         | 1.6603931        | 0.3183138      | 0.0237166             | torch.Size([2, 512, 1])          |
| 2875    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.layers.4.rsqrt                     | output              | torch.float32 |         | 0.7760563         | 3.6387966        | 1.8988681      | 0.1618767             | torch.Size([2, 512, 1])          |
| 2876    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.out_mul                   | input_0             | torch.float32 |         | -0.5534642        | 7.4520764        | -0.0000000     | 0.3183150             | torch.Size([2, 512, 256])        |
| 2876    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.out_mul                   | input_1             | torch.float32 |         | 0.7760563         | 3.6387966        | 1.8988681      | 0.1618767             | torch.Size([2, 512, 1])          |
| 2876    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.out_mul                   | output              | torch.float32 |         | -0.5837125        | 8.1221704        | -0.0000000     | 0.9999661             | torch.Size([2, 512, 256])        |
| 2877    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.4.weight_quant              | input               | torch.float32 |         | 0.7249702         | 1.1691658        | 0.9793198      | 0.0052795             | torch.Size([256])                |
| 2877    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.4.weight_quant              | output              | torch.float32 |         | 0.7249702         | 1.1691658        | 0.9793198      | 0.0052795             | torch.Size([256])                |
| 2878    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.weight_mul                | input_0             | torch.float32 |         | -0.5837125        | 8.1221704        | -0.0000000     | 0.9999661             | torch.Size([2, 512, 256])        |
| 2878    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.weight_mul                | input_1             | torch.float32 |         | 0.7249702         | 1.1691658        | 0.9793198      | 0.0052795             | torch.Size([256])                |
| 2878    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.4.weight_mul                | output              | torch.float32 |         | -0.6824567        | 9.0963345        | -0.0020998     | 0.9584438             | torch.Size([2, 512, 256])        |
| 2879    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.4.bias_quant                | input               | torch.float32 |         | -0.1581516        | 0.2960921        | 0.0620406      | 0.0084647             | torch.Size([256])                |
| 2879    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.4.bias_quant                | output              | torch.float32 |         | -0.1581516        | 0.2960921        | 0.0620406      | 0.0084647             | torch.Size([256])                |
| 2880    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.layers.4.bias_add                  | input_0             | torch.float32 |         | -0.6824567        | 9.0963345        | -0.0020998     | 0.9584438             | torch.Size([2, 512, 256])        |
| 2880    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.layers.4.bias_add                  | input_1             | torch.float32 |         | -0.1581516        | 0.2960921        | 0.0620406      | 0.0084647             | torch.Size([256])                |
| 2880    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.layers.4.bias_add                  | output              | torch.float32 |         | -0.7020136        | 9.1232014        | 0.0599408      | 0.9177279             | torch.Size([2, 512, 256])        |
| 2881    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.5                           | input               | torch.float32 |         | -0.7020136        | 9.1232014        | 0.0599408      | 0.9177279             | torch.Size([2, 512, 256])        |
| 2881    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.5                           | weight              | torch.float32 |         | -0.5549148        | 0.5088162        | 0.0022494      | 0.0062838             | torch.Size([256, 256])           |
| 2881    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.5                           | bias                | torch.float32 |         | -0.1952680        | 0.0616404        | -0.0414291     | 0.0020574             | torch.Size([256])                |
| 2881    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.5                           | output              | torch.float32 |         | -8.8311472        | 11.4997749       | -0.8631141     | 3.2248311             | torch.Size([2, 512, 256])        |
| 2882    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.layers.6                           | input               | torch.float32 |         | 0.0000000         | 11.4997749       | 0.3742267      | 0.6564142             | torch.Size([2, 512, 256])        |
| 2882    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.layers.6                           | output              | torch.float32 |         | 0.0000000         | 11.4997749       | 0.3742267      | 0.6564142             | torch.Size([2, 512, 256])        |
| 2883    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.7                           | input               | torch.float32 |         | 0.0000000         | 11.4997749       | 0.3742267      | 0.6564142             | torch.Size([2, 512, 256])        |
| 2883    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.7                           | weight              | torch.float32 |         | -0.4746350        | 0.4722923        | -0.0083948     | 0.0048993             | torch.Size([256, 256])           |
| 2883    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.7                           | bias                | torch.float32 |         | -0.1257761        | 0.3425134        | -0.0276972     | 0.0021361             | torch.Size([256])                |
| 2883    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.7                           | output              | torch.float32 |         | -9.2673883        | 29.4953556       | -1.2751063     | 3.3067036             | torch.Size([2, 512, 256])        |
| 2884    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.layers.8                           | input               | torch.float32 |         | 0.0000000         | 29.4953556       | 0.2443367      | 1.1661458             | torch.Size([2, 512, 256])        |
| 2884    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.layers.8                           | output              | torch.float32 |         | 0.0000000         | 29.4953556       | 0.2443367      | 1.1661458             | torch.Size([2, 512, 256])        |
| 2885    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.layers.9.input_mean.mean           | input_0             | torch.float32 |         | 0.0000000         | 29.4953556       | 0.2443367      | 1.1661458             | torch.Size([2, 512, 256])        |
| 2885    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.layers.9.input_mean.mean           | output              | torch.float32 |         | 0.0807026         | 0.8739437        | 0.2443367      | 0.0070303             | torch.Size([2, 512, 1])          |
| 2886    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.layers.9.sub                       | input_0             | torch.float32 |         | 0.0000000         | 29.4953556       | 0.2443367      | 1.1661458             | torch.Size([2, 512, 256])        |
| 2886    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.layers.9.sub                       | input_1             | torch.float32 |         | 0.0807026         | 0.8739437        | 0.2443367      | 0.0070303             | torch.Size([2, 512, 1])          |
| 2886    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.layers.9.sub                       | output              | torch.float32 |         | -0.8739437        | 29.2466908       | 0.0000000      | 1.1591222             | torch.Size([2, 512, 256])        |
| 2887    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.mul                       | input_0             | torch.float32 |         | -0.8739437        | 29.2466908       | 0.0000000      | 1.1591222             | torch.Size([2, 512, 256])        |
| 2887    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.mul                       | input_1             | torch.float32 |         | -0.8739437        | 29.2466908       | 0.0000000      | 1.1591222             | torch.Size([2, 512, 256])        |
| 2887    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.mul                       | output              | torch.float32 |         | 0.0000000         | 855.3688965      | 1.1591179      | 298.6749573           | torch.Size([2, 512, 256])        |
| 2888    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.layers.9.var_mean.mean             | input_0             | torch.float32 |         | 0.0000000         | 855.3688965      | 1.1591179      | 298.6749573           | torch.Size([2, 512, 256])        |
| 2888    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.layers.9.var_mean.mean             | output              | torch.float32 |         | 0.1441885         | 4.4082236        | 1.1591179      | 0.4801159             | torch.Size([2, 512, 1])          |
| 2889    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.layers.9.rsqrt                     | input               | torch.float32 |         | 0.1441885         | 4.4082236        | 1.1591179      | 0.4801159             | torch.Size([2, 512, 1])          |
| 2889    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.layers.9.rsqrt                     | output              | torch.float32 |         | 0.4762859         | 2.6334169        | 1.0641514      | 0.1077393             | torch.Size([2, 512, 1])          |
| 2890    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.out_mul                   | input_0             | torch.float32 |         | -0.8739437        | 29.2466908       | 0.0000000      | 1.1591222             | torch.Size([2, 512, 256])        |
| 2890    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.out_mul                   | input_1             | torch.float32 |         | 0.4762859         | 2.6334169        | 1.0641514      | 0.1077393             | torch.Size([2, 512, 1])          |
| 2890    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.out_mul                   | output              | torch.float32 |         | -0.5343153        | 15.8070784       | 0.0000000      | 0.9999914             | torch.Size([2, 512, 256])        |
| 2891    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.9.weight_quant              | input               | torch.float32 |         | 0.6879961         | 1.2603064        | 0.9672197      | 0.0079379             | torch.Size([256])                |
| 2891    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.9.weight_quant              | output              | torch.float32 |         | 0.6879961         | 1.2603064        | 0.9672197      | 0.0079379             | torch.Size([256])                |
| 2892    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.weight_mul                | input_0             | torch.float32 |         | -0.5343153        | 15.8070784       | 0.0000000      | 0.9999914             | torch.Size([2, 512, 256])        |
| 2892    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.weight_mul                | input_1             | torch.float32 |         | 0.6879961         | 1.2603064        | 0.9672197      | 0.0079379             | torch.Size([256])                |
| 2892    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.9.weight_mul                | output              | torch.float32 |         | -0.6734009        | 10.8752089       | -0.0110920     | 0.6716928             | torch.Size([2, 512, 256])        |
| 2893    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.9.bias_quant                | input               | torch.float32 |         | -0.2941498        | 0.1362485        | 0.0674987      | 0.0034837             | torch.Size([256])                |
| 2893    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.9.bias_quant                | output              | torch.float32 |         | -0.2941498        | 0.1362485        | 0.0674987      | 0.0034837             | torch.Size([256])                |
| 2894    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.layers.9.bias_add                  | input_0             | torch.float32 |         | -0.6734009        | 10.8752089       | -0.0110920     | 0.6716928             | torch.Size([2, 512, 256])        |
| 2894    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.layers.9.bias_add                  | input_1             | torch.float32 |         | -0.2941498        | 0.1362485        | 0.0674987      | 0.0034837             | torch.Size([256])                |
| 2894    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.layers.9.bias_add                  | output              | torch.float32 |         | -0.6617566        | 10.5810595       | 0.0564067      | 0.6305786             | torch.Size([2, 512, 256])        |
| 2895    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.10                          | input               | torch.float32 |         | -0.6617566        | 10.5810595       | 0.0564067      | 0.6305786             | torch.Size([2, 512, 256])        |
| 2895    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.10                          | weight              | torch.float32 |         | -0.5602010        | 0.3975652        | -0.0010181     | 0.0060089             | torch.Size([11, 256])            |
| 2895    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.10                          | bias                | torch.float32 |         | -0.0569350        | 0.0453742        | -0.0089781     | 0.0008329             | torch.Size([11])                 |
| 2895    | torch.nn.modules.linear.Linear                                                    | head.layers.41.layers.10                          | output              | torch.float32 |         | -19.3429184       | 17.3389091       | 0.1635286      | 3.9960387             | torch.Size([2, 512, 11])         |
| 2896    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.11.scale_quant_stub         | input               | torch.float32 |         | 0.0074676         | 0.9877831        | 0.1357567      | 0.0805249             | torch.Size([11])                 |
| 2896    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.layers.11.scale_quant_stub         | output              | torch.float32 |         | 0.0074676         | 0.9877831        | 0.1357567      | 0.0805249             | torch.Size([11])                 |
| 2897    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.11.mul                      | input_0             | torch.float32 |         | -19.3429184       | 17.3389091       | 0.1635286      | 3.9960387             | torch.Size([2, 512, 11])         |
| 2897    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.11.mul                      | input_1             | torch.float32 |         | 0.0074676         | 0.9877831        | 0.1357567      | 0.0805249             | torch.Size([11])                 |
| 2897    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.layers.11.mul                      | output              | torch.float32 |         | -1.5988384        | 1.5398897        | -0.0025638     | 0.0282236             | torch.Size([2, 512, 11])         |
| 2898    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.add2                               | input_0             | torch.float32 |         | -1.5988384        | 1.5398897        | -0.0025638     | 0.0282236             | torch.Size([2, 512, 11])         |
| 2898    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.add2                               | input_1             | torch.float32 |         | -53.3880920       | 53.3906403       | 0.2613437      | 80.2549210            | torch.Size([2, 512, 11])         |
| 2898    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.add2                               | output              | torch.float32 |         | -53.3832436       | 53.3954773       | 0.2587798      | 80.3182755            | torch.Size([2, 512, 11])         |
| 2899    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.0                       | input               | torch.float32 |         | -4.2215729        | 4.6305103        | 0.0020757      | 0.7369147             | torch.Size([2, 512, 256])        |
| 2899    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.0                       | weight              | torch.float32 |         | -0.3916217        | 0.4025688        | -0.0007721     | 0.0074816             | torch.Size([256, 256])           |
| 2899    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.0                       | bias                | torch.float32 |         | -0.2124989        | 0.1511600        | -0.0473562     | 0.0046897             | torch.Size([256])                |
| 2899    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.0                       | output              | torch.float32 |         | -11.1721096       | 14.4997673       | -0.4630996     | 10.5331097            | torch.Size([2, 512, 256])        |
| 2900    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.cls_layers.1                       | input               | torch.float32 |         | 0.0000000         | 14.4997673       | 1.0765487      | 3.8210216             | torch.Size([2, 512, 256])        |
| 2900    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.cls_layers.1                       | output              | torch.float32 |         | 0.0000000         | 14.4997673       | 1.0765487      | 3.8210216             | torch.Size([2, 512, 256])        |
| 2901    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.2.input_mean.mean       | input_0             | torch.float32 |         | 0.0000000         | 14.4997673       | 1.0765487      | 3.8210216             | torch.Size([2, 512, 256])        |
| 2901    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.2.input_mean.mean       | output              | torch.float32 |         | 0.4115214         | 1.5727093        | 1.0765487      | 0.1109768             | torch.Size([2, 512, 1])          |
| 2902    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.2.sub                   | input_0             | torch.float32 |         | 0.0000000         | 14.4997673       | 1.0765487      | 3.8210216             | torch.Size([2, 512, 256])        |
| 2902    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.2.sub                   | input_1             | torch.float32 |         | 0.4115214         | 1.5727093        | 1.0765487      | 0.1109768             | torch.Size([2, 512, 1])          |
| 2902    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.2.sub                   | output              | torch.float32 |         | -1.5727093        | 13.3024302       | -0.0000000     | 3.7101526             | torch.Size([2, 512, 256])        |
| 2903    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.mul                   | input_0             | torch.float32 |         | -1.5727093        | 13.3024302       | -0.0000000     | 3.7101526             | torch.Size([2, 512, 256])        |
| 2903    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.mul                   | input_1             | torch.float32 |         | -1.5727093        | 13.3024302       | -0.0000000     | 3.7101526             | torch.Size([2, 512, 256])        |
| 2903    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.mul                   | output              | torch.float32 |         | 0.0000000         | 176.9546509      | 3.7101383      | 99.1361618            | torch.Size([2, 512, 256])        |
| 2904    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.2.var_mean.mean         | input_0             | torch.float32 |         | 0.0000000         | 176.9546509      | 3.7101383      | 99.1361618            | torch.Size([2, 512, 256])        |
| 2904    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.2.var_mean.mean         | output              | torch.float32 |         | 1.0339568         | 5.8485222        | 3.7101386      | 1.9387428             | torch.Size([2, 512, 1])          |
| 2905    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.cls_layers.2.rsqrt                 | input               | torch.float32 |         | 1.0339568         | 5.8485222        | 3.7101386      | 1.9387428             | torch.Size([2, 512, 1])          |
| 2905    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.cls_layers.2.rsqrt                 | output              | torch.float32 |         | 0.4135010         | 0.9834374        | 0.5495412      | 0.0124365             | torch.Size([2, 512, 1])          |
| 2906    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.out_mul               | input_0             | torch.float32 |         | -1.5727093        | 13.3024302       | -0.0000000     | 3.7101526             | torch.Size([2, 512, 256])        |
| 2906    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.out_mul               | input_1             | torch.float32 |         | 0.4135010         | 0.9834374        | 0.5495412      | 0.0124365             | torch.Size([2, 512, 1])          |
| 2906    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.out_mul               | output              | torch.float32 |         | -0.6627786        | 7.9016409        | -0.0000000     | 1.0000007             | torch.Size([2, 512, 256])        |
| 2907    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.cls_layers.2.weight_quant          | input               | torch.float32 |         | 0.7428278         | 1.2361827        | 0.9719122      | 0.0050141             | torch.Size([256])                |
| 2907    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.cls_layers.2.weight_quant          | output              | torch.float32 |         | 0.7428278         | 1.2361827        | 0.9719122      | 0.0050141             | torch.Size([256])                |
| 2908    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.weight_mul            | input_0             | torch.float32 |         | -0.6627786        | 7.9016409        | -0.0000000     | 1.0000007             | torch.Size([2, 512, 256])        |
| 2908    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.weight_mul            | input_1             | torch.float32 |         | 0.7428278         | 1.2361827        | 0.9719122      | 0.0050141             | torch.Size([256])                |
| 2908    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.2.weight_mul            | output              | torch.float32 |         | -0.7964117        | 9.7678719        | 0.0036016      | 0.9796404             | torch.Size([2, 512, 256])        |
| 2909    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.cls_layers.2.bias_quant            | input               | torch.float32 |         | -0.0868656        | 0.2186394        | 0.0415796      | 0.0023078             | torch.Size([256])                |
| 2909    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.cls_layers.2.bias_quant            | output              | torch.float32 |         | -0.0868656        | 0.2186394        | 0.0415796      | 0.0023078             | torch.Size([256])                |
| 2910    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.2.bias_add              | input_0             | torch.float32 |         | -0.7964117        | 9.7678719        | 0.0036016      | 0.9796404             | torch.Size([2, 512, 256])        |
| 2910    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.2.bias_add              | input_1             | torch.float32 |         | -0.0868656        | 0.2186394        | 0.0415796      | 0.0023078             | torch.Size([256])                |
| 2910    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.2.bias_add              | output              | torch.float32 |         | -0.7576593        | 9.9354248        | 0.0451812      | 0.9952122             | torch.Size([2, 512, 256])        |
| 2911    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.3                       | input               | torch.float32 |         | -0.7576593        | 9.9354248        | 0.0451812      | 0.9952122             | torch.Size([2, 512, 256])        |
| 2911    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.3                       | weight              | torch.float32 |         | -0.6531906        | 0.4522330        | 0.0064459      | 0.0071903             | torch.Size([256, 256])           |
| 2911    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.3                       | bias                | torch.float32 |         | -0.1963050        | 0.2913345        | -0.0591058     | 0.0040117             | torch.Size([256])                |
| 2911    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.3                       | output              | torch.float32 |         | -18.1156654       | 27.5464821       | -1.8706732     | 9.1163998             | torch.Size([2, 512, 256])        |
| 2912    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.cls_layers.4                       | input               | torch.float32 |         | 0.0000000         | 27.5464821       | 0.4599229      | 2.4794135             | torch.Size([2, 512, 256])        |
| 2912    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.cls_layers.4                       | output              | torch.float32 |         | 0.0000000         | 27.5464821       | 0.4599229      | 2.4794135             | torch.Size([2, 512, 256])        |
| 2913    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.5.input_mean.mean       | input_0             | torch.float32 |         | 0.0000000         | 27.5464821       | 0.4599229      | 2.4794135             | torch.Size([2, 512, 256])        |
| 2913    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.5.input_mean.mean       | output              | torch.float32 |         | 0.2357059         | 1.1633086        | 0.4599229      | 0.0066865             | torch.Size([2, 512, 1])          |
| 2914    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.5.sub                   | input_0             | torch.float32 |         | 0.0000000         | 27.5464821       | 0.4599229      | 2.4794135             | torch.Size([2, 512, 256])        |
| 2914    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.5.sub                   | input_1             | torch.float32 |         | 0.2357059         | 1.1633086        | 0.4599229      | 0.0066865             | torch.Size([2, 512, 1])          |
| 2914    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.cls_layers.5.sub                   | output              | torch.float32 |         | -1.1633086        | 27.2197895       | 0.0000000      | 2.4727335             | torch.Size([2, 512, 256])        |
| 2915    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.mul                   | input_0             | torch.float32 |         | -1.1633086        | 27.2197895       | 0.0000000      | 2.4727335             | torch.Size([2, 512, 256])        |
| 2915    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.mul                   | input_1             | torch.float32 |         | -1.1633086        | 27.2197895       | 0.0000000      | 2.4727335             | torch.Size([2, 512, 256])        |
| 2915    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.mul                   | output              | torch.float32 |         | 0.0000000         | 740.9169312      | 2.4727240      | 553.7733765           | torch.Size([2, 512, 256])        |
| 2916    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.5.var_mean.mean         | input_0             | torch.float32 |         | 0.0000000         | 740.9169312      | 2.4727240      | 553.7733765           | torch.Size([2, 512, 256])        |
| 2916    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.cls_layers.5.var_mean.mean         | output              | torch.float32 |         | 0.3956175         | 3.9020724        | 2.4727240      | 0.4910919             | torch.Size([2, 512, 1])          |
| 2917    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.cls_layers.5.rsqrt                 | input               | torch.float32 |         | 0.3956175         | 3.9020724        | 2.4727240      | 0.4910919             | torch.Size([2, 512, 1])          |
| 2917    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.cls_layers.5.rsqrt                 | output              | torch.float32 |         | 0.5062345         | 1.5898521        | 0.6603230      | 0.0143998             | torch.Size([2, 512, 1])          |
| 2918    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.out_mul               | input_0             | torch.float32 |         | -1.1633086        | 27.2197895       | 0.0000000      | 2.4727335             | torch.Size([2, 512, 256])        |
| 2918    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.out_mul               | input_1             | torch.float32 |         | 0.5062345         | 1.5898521        | 0.6603230      | 0.0143998             | torch.Size([2, 512, 1])          |
| 2918    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.out_mul               | output              | torch.float32 |         | -0.7256463        | 14.4662743       | 0.0000000      | 0.9999993             | torch.Size([2, 512, 256])        |
| 2919    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.cls_layers.5.weight_quant          | input               | torch.float32 |         | 0.5720253         | 0.9521823        | 0.8364800      | 0.0042872             | torch.Size([256])                |
| 2919    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.cls_layers.5.weight_quant          | output              | torch.float32 |         | 0.5720253         | 0.9521823        | 0.8364800      | 0.0042872             | torch.Size([256])                |
| 2920    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.weight_mul            | input_0             | torch.float32 |         | -0.7256463        | 14.4662743       | 0.0000000      | 0.9999993             | torch.Size([2, 512, 256])        |
| 2920    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.weight_mul            | input_1             | torch.float32 |         | 0.5720253         | 0.9521823        | 0.8364800      | 0.0042872             | torch.Size([256])                |
| 2920    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.cls_layers.5.weight_mul            | output              | torch.float32 |         | -0.6909476        | 13.0007477       | 0.0092809      | 0.7828901             | torch.Size([2, 512, 256])        |
| 2921    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.cls_layers.5.bias_quant            | input               | torch.float32 |         | -0.1434759        | 0.2099707        | 0.0936137      | 0.0056069             | torch.Size([256])                |
| 2921    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.cls_layers.5.bias_quant            | output              | torch.float32 |         | -0.1434759        | 0.2099707        | 0.0936137      | 0.0056069             | torch.Size([256])                |
| 2922    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.5.bias_add              | input_0             | torch.float32 |         | -0.6909476        | 13.0007477       | 0.0092809      | 0.7828901             | torch.Size([2, 512, 256])        |
| 2922    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.5.bias_add              | input_1             | torch.float32 |         | -0.1434759        | 0.2099707        | 0.0936137      | 0.0056069             | torch.Size([256])                |
| 2922    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.cls_layers.5.bias_add              | output              | torch.float32 |         | -0.7896047        | 12.9099331       | 0.1028946      | 0.7458686             | torch.Size([2, 512, 256])        |
| 2923    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.6                       | input               | torch.float32 |         | -0.7896047        | 12.9099331       | 0.1028946      | 0.7458686             | torch.Size([2, 512, 256])        |
| 2923    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.6                       | weight              | torch.float32 |         | -0.3821189        | 0.1957047        | -0.0082432     | 0.0038872             | torch.Size([10, 256])            |
| 2923    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.6                       | bias                | torch.float32 |         | -4.5506554        | -4.5029793       | -4.5237875     | 0.0002058             | torch.Size([10])                 |
| 2923    | torch.nn.modules.linear.Linear                                                    | head.layers.41.cls_layers.6                       | output              | torch.float32 |         | -8.3055687        | 2.6549125        | -5.1422715     | 1.8885295             | torch.Size([2, 512, 10])         |
| 2924    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.0                   | input               | torch.float32 |         | -4.7876415        | 8.8668385        | 0.0627999      | 1.4825734             | torch.Size([2, 512, 256])        |
| 2924    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.0                   | weight              | torch.float32 |         | -0.5681219        | 0.4727457        | 0.0007156      | 0.0080122             | torch.Size([256, 256])           |
| 2924    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.0                   | bias                | torch.float32 |         | -0.2011542        | 0.2002611        | -0.0506676     | 0.0076206             | torch.Size([256])                |
| 2924    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.0                   | output              | torch.float32 |         | -15.6362123       | 13.3609915       | -1.2334991     | 12.3931179            | torch.Size([2, 512, 256])        |
| 2925    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.quality_layers.1                   | input               | torch.float32 |         | 0.0000000         | 13.3609915       | 0.8911880      | 3.0744274             | torch.Size([2, 512, 256])        |
| 2925    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.quality_layers.1                   | output              | torch.float32 |         | 0.0000000         | 13.3609915       | 0.8911880      | 3.0744274             | torch.Size([2, 512, 256])        |
| 2926    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.2.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 13.3609915       | 0.8911880      | 3.0744274             | torch.Size([2, 512, 256])        |
| 2926    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.2.input_mean.mean   | output              | torch.float32 |         | 0.5583236         | 1.5134070        | 0.8911881      | 0.0276932             | torch.Size([2, 512, 1])          |
| 2927    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.2.sub               | input_0             | torch.float32 |         | 0.0000000         | 13.3609915       | 0.8911880      | 3.0744274             | torch.Size([2, 512, 256])        |
| 2927    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.2.sub               | input_1             | torch.float32 |         | 0.5583236         | 1.5134070        | 0.8911881      | 0.0276932             | torch.Size([2, 512, 1])          |
| 2927    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.2.sub               | output              | torch.float32 |         | -1.5134070        | 12.3718290       | 0.0000000      | 3.0467613             | torch.Size([2, 512, 256])        |
| 2928    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.mul               | input_0             | torch.float32 |         | -1.5134070        | 12.3718290       | 0.0000000      | 3.0467613             | torch.Size([2, 512, 256])        |
| 2928    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.mul               | input_1             | torch.float32 |         | -1.5134070        | 12.3718290       | 0.0000000      | 3.0467613             | torch.Size([2, 512, 256])        |
| 2928    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.mul               | output              | torch.float32 |         | 0.0000000         | 153.0621490      | 3.0467496      | 73.4398804            | torch.Size([2, 512, 256])        |
| 2929    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.2.var_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 153.0621490      | 3.0467496      | 73.4398804            | torch.Size([2, 512, 256])        |
| 2929    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.2.var_mean.mean     | output              | torch.float32 |         | 1.4728477         | 6.1737223        | 3.0467496      | 0.7892295             | torch.Size([2, 512, 1])          |
| 2930    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.quality_layers.2.rsqrt             | input               | torch.float32 |         | 1.4728477         | 6.1737223        | 3.0467496      | 0.7892295             | torch.Size([2, 512, 1])          |
| 2930    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.quality_layers.2.rsqrt             | output              | torch.float32 |         | 0.4024631         | 0.8239856        | 0.5895336      | 0.0061346             | torch.Size([2, 512, 1])          |
| 2931    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.out_mul           | input_0             | torch.float32 |         | -1.5134070        | 12.3718290       | 0.0000000      | 3.0467613             | torch.Size([2, 512, 256])        |
| 2931    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.out_mul           | input_1             | torch.float32 |         | 0.4024631         | 0.8239856        | 0.5895336      | 0.0061346             | torch.Size([2, 512, 1])          |
| 2931    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.out_mul           | output              | torch.float32 |         | -0.6690149        | 7.0555997        | 0.0000000      | 1.0000002             | torch.Size([2, 512, 256])        |
| 2932    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.quality_layers.2.weight_quant      | input               | torch.float32 |         | 0.7529514         | 1.2044538        | 0.9968498      | 0.0071440             | torch.Size([256])                |
| 2932    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.quality_layers.2.weight_quant      | output              | torch.float32 |         | 0.7529514         | 1.2044538        | 0.9968498      | 0.0071440             | torch.Size([256])                |
| 2933    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.weight_mul        | input_0             | torch.float32 |         | -0.6690149        | 7.0555997        | 0.0000000      | 1.0000002             | torch.Size([2, 512, 256])        |
| 2933    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.weight_mul        | input_1             | torch.float32 |         | 0.7529514         | 1.2044538        | 0.9968498      | 0.0071440             | torch.Size([256])                |
| 2933    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.2.weight_mul        | output              | torch.float32 |         | -0.8057975        | 7.4436445        | -0.0081492     | 0.9889711             | torch.Size([2, 512, 256])        |
| 2934    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.quality_layers.2.bias_quant        | input               | torch.float32 |         | -0.1380954        | 0.2172861        | 0.0049230      | 0.0046242             | torch.Size([256])                |
| 2934    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.quality_layers.2.bias_quant        | output              | torch.float32 |         | -0.1380954        | 0.2172861        | 0.0049230      | 0.0046242             | torch.Size([256])                |
| 2935    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.2.bias_add          | input_0             | torch.float32 |         | -0.8057975        | 7.4436445        | -0.0081492     | 0.9889711             | torch.Size([2, 512, 256])        |
| 2935    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.2.bias_add          | input_1             | torch.float32 |         | -0.1380954        | 0.2172861        | 0.0049230      | 0.0046242             | torch.Size([256])                |
| 2935    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.2.bias_add          | output              | torch.float32 |         | -0.9036791        | 7.4768367        | -0.0032262     | 1.0371442             | torch.Size([2, 512, 256])        |
| 2936    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.3                   | input               | torch.float32 |         | -0.9036791        | 7.4768367        | -0.0032262     | 1.0371442             | torch.Size([2, 512, 256])        |
| 2936    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.3                   | weight              | torch.float32 |         | -0.5449315        | 0.4749622        | 0.0150954      | 0.0048535             | torch.Size([256, 256])           |
| 2936    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.3                   | bias                | torch.float32 |         | -0.1342729        | 0.3925043        | -0.0479803     | 0.0025327             | torch.Size([256])                |
| 2936    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.3                   | output              | torch.float32 |         | -11.5700045       | 47.6264610       | -2.7684574     | 9.8133888             | torch.Size([2, 512, 256])        |
| 2937    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.quality_layers.4                   | input               | torch.float32 |         | 0.0000000         | 47.6264610       | 0.3034196      | 5.0848413             | torch.Size([2, 512, 256])        |
| 2937    | torch.nn.modules.activation.ReLU                                                  | head.layers.41.quality_layers.4                   | output              | torch.float32 |         | 0.0000000         | 47.6264610       | 0.3034196      | 5.0848413             | torch.Size([2, 512, 256])        |
| 2938    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.5.input_mean.mean   | input_0             | torch.float32 |         | 0.0000000         | 47.6264610       | 0.3034196      | 5.0848413             | torch.Size([2, 512, 256])        |
| 2938    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.5.input_mean.mean   | output              | torch.float32 |         | 0.2181975         | 0.4794174        | 0.3034196      | 0.0030615             | torch.Size([2, 512, 1])          |
| 2939    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.5.sub               | input_0             | torch.float32 |         | 0.0000000         | 47.6264610       | 0.3034196      | 5.0848413             | torch.Size([2, 512, 256])        |
| 2939    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.5.sub               | input_1             | torch.float32 |         | 0.2181975         | 0.4794174        | 0.3034196      | 0.0030615             | torch.Size([2, 512, 1])          |
| 2939    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.sub        | head.layers.41.quality_layers.5.sub               | output              | torch.float32 |         | -0.4794174        | 47.3119431       | 0.0000000      | 5.0817823             | torch.Size([2, 512, 256])        |
| 2940    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.mul               | input_0             | torch.float32 |         | -0.4794174        | 47.3119431       | 0.0000000      | 5.0817823             | torch.Size([2, 512, 256])        |
| 2940    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.mul               | input_1             | torch.float32 |         | -0.4794174        | 47.3119431       | 0.0000000      | 5.0817823             | torch.Size([2, 512, 256])        |
| 2940    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.mul               | output              | torch.float32 |         | 0.0000000         | 2238.4199219     | 5.0817633      | 5025.5825195          | torch.Size([2, 512, 256])        |
| 2941    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.5.var_mean.mean     | input_0             | torch.float32 |         | 0.0000000         | 2238.4199219     | 5.0817633      | 5025.5825195          | torch.Size([2, 512, 256])        |
| 2941    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mean       | head.layers.41.quality_layers.5.var_mean.mean     | output              | torch.float32 |         | 1.4862461         | 9.6450329        | 5.0817628      | 2.4050744             | torch.Size([2, 512, 1])          |
| 2942    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.quality_layers.5.rsqrt             | input               | torch.float32 |         | 1.4862461         | 9.6450329        | 5.0817628      | 2.4050744             | torch.Size([2, 512, 1])          |
| 2942    | horizon_plugin_pytorch.nn.rsqrt.Rsqrt                                             | head.layers.41.quality_layers.5.rsqrt             | output              | torch.float32 |         | 0.3219941         | 0.8202631        | 0.4618582      | 0.0066937             | torch.Size([2, 512, 1])          |
| 2943    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.out_mul           | input_0             | torch.float32 |         | -0.4794174        | 47.3119431       | 0.0000000      | 5.0817823             | torch.Size([2, 512, 256])        |
| 2943    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.out_mul           | input_1             | torch.float32 |         | 0.3219941         | 0.8202631        | 0.4618582      | 0.0066937             | torch.Size([2, 512, 1])          |
| 2943    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.out_mul           | output              | torch.float32 |         | -0.3217948        | 15.2912931       | 0.0000000      | 1.0000015             | torch.Size([2, 512, 256])        |
| 2944    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.quality_layers.5.weight_quant      | input               | torch.float32 |         | 0.4071644         | 0.9784095        | 0.7547790      | 0.0145837             | torch.Size([256])                |
| 2944    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.quality_layers.5.weight_quant      | output              | torch.float32 |         | 0.4071644         | 0.9784095        | 0.7547790      | 0.0145837             | torch.Size([256])                |
| 2945    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.weight_mul        | input_0             | torch.float32 |         | -0.3217948        | 15.2912931       | 0.0000000      | 1.0000015             | torch.Size([2, 512, 256])        |
| 2945    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.weight_mul        | input_1             | torch.float32 |         | 0.4071644         | 0.9784095        | 0.7547790      | 0.0145837             | torch.Size([256])                |
| 2945    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.mul        | head.layers.41.quality_layers.5.weight_mul        | output              | torch.float32 |         | -0.3148471        | 9.1513462        | -0.0039911     | 0.4084769             | torch.Size([2, 512, 256])        |
| 2946    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.quality_layers.5.bias_quant        | input               | torch.float32 |         | -0.5791797        | 0.1132794        | 0.0721813      | 0.0038805             | torch.Size([256])                |
| 2946    | horizon_plugin_pytorch.quantization.stubs.QuantStub                               | head.layers.41.quality_layers.5.bias_quant        | output              | torch.float32 |         | -0.5791797        | 0.1132794        | 0.0721813      | 0.0038805             | torch.Size([256])                |
| 2947    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.5.bias_add          | input_0             | torch.float32 |         | -0.3148471        | 9.1513462        | -0.0039911     | 0.4084769             | torch.Size([2, 512, 256])        |
| 2947    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.5.bias_add          | input_1             | torch.float32 |         | -0.5791797        | 0.1132794        | 0.0721813      | 0.0038805             | torch.Size([256])                |
| 2947    | horizon_plugin_pytorch.nn.quantized.functional_modules.FloatFunctional.add        | head.layers.41.quality_layers.5.bias_add          | output              | torch.float32 |         | -0.6840384        | 8.7713776        | 0.0681901      | 0.3630659             | torch.Size([2, 512, 256])        |
| 2948    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.6                   | input               | torch.float32 |         | -0.6840384        | 8.7713776        | 0.0681901      | 0.3630659             | torch.Size([2, 512, 256])        |
| 2948    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.6                   | weight              | torch.float32 |         | -0.1633572        | 0.1557941        | -0.0001491     | 0.0013779             | torch.Size([2, 256])             |
| 2948    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.6                   | bias                | torch.float32 |         | 0.0361053         | 0.0646671        | 0.0503862      | 0.0004079             | torch.Size([2])                  |
| 2948    | torch.nn.modules.linear.Linear                                                    | head.layers.41.quality_layers.6                   | output              | torch.float32 |         | -2.6437218        | 4.9895453        | 0.1422100      | 0.9439211             | torch.Size([2, 512, 2])          |
| 2949    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(5)                                   | input               | torch.float32 |         | -53.3832436       | 53.3954773       | 0.2587798      | 80.3182755            | torch.Size([2, 512, 11])         |
| 2949    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(5)                                   | output              | torch.float32 |         | -53.3832436       | 53.3954773       | 0.2587798      | 80.3182755            | torch.Size([2, 512, 11])         |
| 2950    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(6)                                   | input               | torch.float32 |         | -8.3055687        | 2.6549125        | -5.1422715     | 1.8885295             | torch.Size([2, 512, 10])         |
| 2950    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(6)                                   | output              | torch.float32 |         | -8.3055687        | 2.6549125        | -5.1422715     | 1.8885295             | torch.Size([2, 512, 10])         |
| 2951    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(7)                                   | input               | torch.float32 |         | -2.6437218        | 4.9895453        | 0.1422100      | 0.9439211             | torch.Size([2, 512, 2])          |
| 2951    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(7)                                   | output              | torch.float32 |         | -2.6437218        | 4.9895453        | 0.1422100      | 0.9439211             | torch.Size([2, 512, 2])          |
| 2952    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(8)                                   | input               | torch.float32 |         | -8.3055687        | 2.6549125        | -5.1422715     | 1.8885295             | torch.Size([2, 512, 10])         |
| 2952    | torch.ao.quantization.stubs.DeQuantStub                                           | head.dequant(8)                                   | output              | torch.float32 |         | -8.3055687        | 2.6549125        | -5.1422715     | 1.8885295             | torch.Size([2, 512, 10])         |
+---------+-----------------------------------------------------------------------------------+---------------------------------------------------+---------------------+---------------+---------+-------------------+------------------+----------------+-----------------------+----------------------------------+
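The table above is a per-op statistics dump: one row per tensor (input, weight, bias, output) with min/max/mean/var and shape. As a minimal sketch (not the horizon_plugin_pytorch profiler itself — the helper names here are illustrative), equivalent columns can be collected with standard PyTorch forward hooks:

```python
# Hypothetical sketch: collect per-op tensor statistics like the table's
# columns (Min/Max/Mean/Var/Shape) using plain torch.nn.Module forward hooks.
import torch
import torch.nn as nn

def tensor_stats(t: torch.Tensor) -> dict:
    """Summarize one tensor the way each table row does."""
    t = t.detach().float()
    return {
        "min": t.min().item(),
        "max": t.max().item(),
        "mean": t.mean().item(),
        "var": t.var(unbiased=False).item(),
        "shape": tuple(t.shape),
    }

def attach_stat_hooks(model: nn.Module, records: list):
    """Register a hook on every leaf module to log input/output stats."""
    handles = []
    for name, mod in model.named_modules():
        if len(list(mod.children())) > 0:
            continue  # only leaf ops, matching the per-op rows above
        def hook(m, inputs, output, name=name):
            for i, inp in enumerate(inputs):
                if torch.is_tensor(inp):
                    records.append((name, f"input_{i}", tensor_stats(inp)))
            if torch.is_tensor(output):
                records.append((name, "output", tensor_stats(output)))
        handles.append(mod.register_forward_hook(hook))
    return handles

# Usage: run one batch through a toy model shaped like the final
# quality_layers Linear above ((2, 512, 256) -> (2, 512, 2)).
records = []
model = nn.Sequential(nn.Linear(256, 2))
handles = attach_stat_hooks(model, records)
model(torch.randn(2, 512, 256))
for h in handles:
    h.remove()  # always detach hooks once profiling is done
```

Weight and bias rows in the dump come from module parameters rather than hooks; the same `tensor_stats` helper applies to `mod.weight` and `mod.bias` directly.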