| ONNX Operator Name | HMCT convert description | PTQ ONNX Operator | Map Description & Graph Fusion Description | HBIR Operator Name | BPU Support Constraints |
|---|---|---|---|---|---|
| Abs | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Acos | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Acosh | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Add | none | Add | //opset 9 func.func @Add(...) { return hbir.add(...) } | hbir.add | lhs: Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op Shape: [*] rhs: Same as lhs output: Same as lhs |
| And | none | And | //opset 9 func.func @And(...) { return hbir.logical_and(...) } | hbir.logical_and | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Type: bool8 |
| ArgMax | none | ArgMax | //opset 9 func.func @ArgMax(%x, %axis=0, %keepdims=1) { hbir.reduce_argmax(%x, dims=[%axis], keepDim=bool(%keepdims)) } | hbir.reduce_argmax | input: Type: int8, int16 Shape: [*] output: Same as input; the output type may additionally be int32 or int64, as long as the size of the reduced axis fits in an int16 |
| ArgMin | none | ArgMin | //opset 9 func.func @ArgMin(%x, %axis=0, %keepdims=1) { hbir.reduce_argmin(%x, dims=[%axis], keepDim=bool(%keepdims)) } | hbir.reduce_argmin | TBD |
| Asin | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Asinh | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Atan | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Atanh | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| AveragePool | none | AveragePool | //opset 9 func.func @AveragePool(%x, %kernel_shape, %pads, %strides, %auto_pad="NOTSET", %ceil_mode=0) { %0 = hbir.transpose(%x, [0, 2, 3, 1]) %1 = hbir.avg_pool2d(%0, %kernel_shape, %strides, %pads, bool(%ceil_mode)) %2 = hbir.transpose(%1, [0, 3, 1, 2]) return %2 } | hbir.avg_pool2d | input: Type: int8, int16 Shape: [*,H,W,C] output: Same as input kernel: Shape: [KH,KW] Dim: KH, KW ∈ [1, 256] stride: Shape: [SH,SW] Dim: SH, SW ∈ [1, 256] pad: Shape: [PN,PH,PW,PC] PN,PH,PW,PC ∈ [-3, 256] |
| hbir.transpose | inputs: no limits output: same as inputs | ||||
| BatchNormalization | The operator is deleted/fused/folded | None | fully supported | ||
| BitShift | none | BitShift | unsupported | ||
| BitwiseAnd | none | BitwiseAnd | unsupported | ||
| BitwiseNot | none | BitwiseNot | unsupported | ||
| BitwiseOr | none | BitwiseOr | unsupported | ||
| BitwiseXor | none | BitwiseXor | unsupported | ||
| Cast | none | Cast | //opset 9 func.func @Cast(...) { return hbir.cast(...) } | hbir.cast | TBD |
| Ceil | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Col2Im | none | Col2Im | unsupported | ||
| Compress | none | Compress | unsupported | ||
| Concat | none | Concat | //opset 9 func.func @Concat(%*args, %axis) { return hbir.concat(%args, dim=%axis) } | hbir.concat | inputs: Arg Number: inputs number ∈ [1, 1024] Dim: all dims < 131072 size < 2G output: same as inputs |
| ConcatFromSequence | none | ConcatFromSequence | unsupported | ||
| Constant | The operator is deleted/fused/folded | None | fully supported | ||
| ConstantOfShape | The operator is deleted/fused/folded | None | fully supported | ||
| Conv | none | Conv | //opset 9 func.func @Conv(%x, %w, %b, %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) { %0 = hbir.transpose(%x, [0, 2, 3, 1]) %1 = hbir.transpose(%w, [0, 2, 3, 1]) %2 = hbir.conv2d(%0, %1, %strides, %pads, %dilations, %group, bias=%b) %3 = hbir.transpose(%2, [0, 3, 1, 2]) return %3 } | hbir.transpose | inputs: no limits output: same as inputs |
| hbir.conv2d | input: Type: int8, int16; input and weight cannot both be int16 Shape: [*,H,W,C] Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536] weight: Type: int8, int16; input and weight cannot both be int16 Shape: [N,KH,KW,C] Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv else [1, 8192] Size: KH\*KW\*C ∈ [1, 65536] bias: Type: f32 output: Type: int8, int16, int32 Other constraints: same as fin stride: Shape: [SH,SW] Dim: SH,SW ∈ [1, 256]; SH,SW ∈ {1} if dilation > 1 pad: Shape: [PN,PH,PW,PC] Dim: PN,PH,PW,PC ∈ [-1, 256] groupNum: fin.c is divisible by group number dilation: Shape: [DH,DW] Dim: DH,DW ∈ [1, 18] others: Don't support even stride when conv is a depthwise conv For each group, fin.c ∈ [1, 8192], KH\*KW\*fin.c ∈ [1, 65535], fin.c = C when group = 1 | ||||
| ConvInteger | none | ConvInteger | unsupported | ||
| ConvTranspose | none | ConvTranspose | //opset 9 func.func @ConvTranspose(%x, %w, %b, %auto_pad="NOTSET", %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) { %0 = hbir.transpose(%x, [0, 2, 3, 1]) %1 = hbir.transpose(%w, [0, 2, 3, 1]) %2 = hbir.conv2dtranspose(%0, %1, %strides, %pads, %dilations, %group, bias=%b, illegalweight=True) %3 = hbir.transpose(%2, [0, 3, 1, 2]) return %3 } | hbir.transpose | inputs: no limits output: same as inputs |
| hbir.conv2dtranspose | input: Type: int8, int16; input and weight cannot both be int16 Shape: [*,H,W,C] Dim: * ∈ [1, 128]; H,W ∈ [1, 65536]; C ∈ [1, 2048] weight: Type: int8, int16; input and weight cannot both be int16 Shape: [N,KH,KW,C] Dim: N,C ∈ [1, 2048]; KH,KW ∈ [1, 14] Size: KH\*KW\*C ∈ [1, 65536] bias: Type: f32 output: Same as input, the type additionally supports int32 stride: Shape: [SH,SW] Dim: SH,SW ∈ [1, 14]; SH < KH; SW < KW; pad: Shape: [PN,PH,PW,PC] Dim: PN,PH,PW,PC ∈ [0, 256] dilation: Shape: [DH,DW] Dim: DH,DW ∈ {1} | ||||
| Cos | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Cosh | none | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| CumSum | none | CumSum | unsupported | ||
| DFT | none | DFT | unsupported | ||
| DepthToSpace | none | DepthToSpace | unsupported | ||
| DequantizeLinear | none | DequantizeLinear | //opset 9 func.func @DequantizeLinear(%x, %x_scale, %x_zero_point){ return qnt.dequantize(%x, scales=%x_scale, zeros=%x_zero_point) } | qnt.dequantize | inputs: no limits outputs: no limits |
| Det | none | Det | unsupported | ||
| Div | The operator is split | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Mul | //opset 9 func.func @Mul(...) { return hbir.mul(...) } | hbir.mul | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| Dropout | The operator is deleted/fused/folded | Dropout | unsupported | ||
| Einsum | none | Einsum | unsupported | ||
| Equal | none | Equal | //opset 9 func.func @Equal(...) { return hbir.equal(...) } | hbir.equal | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Type: bool8 |
| Erf | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Exp | none | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Expand | none | Expand | //opset 9 func.func @Expand(%data, %shape) { // %original_shape and %repeat_list are computed from %data.shape and %shape %0 = hbir.reshape(%data, %original_shape) %1 = hbir.tile(%0, %repeat_list) return %1 } | hbir.reshape | inputs: no limits output: same as inputs |
| hbir.tile | inputs: no limits output: same as inputs | ||||
| EyeLike | none | EyeLike | unsupported | ||
| Flatten | The operator is replaced | Reshape | //opset 9 func.func @Reshape(...) { return hbir.reshape(...) } | hbir.reshape | inputs: no limits output: same as inputs |
| Floor | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| GRU | none | GRU | unsupported | ||
| Gather | none | Gather | //opset 9 func.func @Gather(%x, %indices, %axis=0) { return hbir.index(%x, index=%indices, dim=%axis) } | hbir.index | TBD |
| GatherElements | none | GatherElements | //opset 9 func.func @GatherElements(%data, %indices, %axis=0) { return hbir.gather(%data, %indices, dim=%axis) } | hbir.gather | Unsupported |
| GatherND | none | GatherND | //opset 9 func.func @GatherND(%data, %indices, %dims=0) { return hbir.gather_nd(%data, %indices, dim=np.array(%dims)) } | hbir.gather_nd | TBD |
| Gemm | The operator is replaced | Conv | //opset 9 func.func @Conv(%x, %w, %b, %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) { %0 = hbir.transpose(%x, [0, 2, 3, 1]) %1 = hbir.transpose(%w, [0, 2, 3, 1]) %2 = hbir.conv2d(%0, %1, %strides, %pads, %dilations, %group, bias=%b) %3 = hbir.transpose(%2, [0, 3, 1, 2]) return %3 } | hbir.transpose | inputs: no limits output: same as inputs |
| hbir.conv2d | input: Type: int8, int16; input and weight cannot both be int16 Shape: [*,H,W,C] Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536] weight: Type: int8, int16; input and weight cannot both be int16 Shape: [N,KH,KW,C] Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv else [1, 8192] Size: KH\*KW\*C ∈ [1, 65536] bias: Type: f32 output: Type: int8, int16, int32 Other constraints: same as fin stride: Shape: [SH,SW] Dim: SH,SW ∈ [1, 256]; SH,SW ∈ {1} if dilation > 1 pad: Shape: [PN,PH,PW,PC] Dim: PN,PH,PW,PC ∈ [-1, 256] groupNum: fin.c is divisible by group number dilation: Shape: [DH,DW] Dim: DH,DW ∈ [1, 18] others: Don't support even stride when conv is a depthwise conv For each group, fin.c ∈ [1, 8192], KH\*KW\*fin.c ∈ [1, 65535], fin.c = C when group = 1 | ||||
| GlobalAveragePool | none | GlobalAveragePool | //opset 9 func.func @GlobalAveragePool(%x) { hbir.reduce_mean(%x, dims=[-2, -1], keepDim=True) } | hbir.reduce_mean | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input |
| GlobalLpPool | none | GlobalLpPool | unsupported | ||
| GlobalMaxPool | none | GlobalMaxPool | //opset 9 func.func @GlobalMaxPool(%x) { hbir.reduce_max(%x, dims=[-2, -1], keepDim=True) } | hbir.reduce_max | input: Type: int8, int16 Shape: [*] output: Same as input |
| Greater | none | Greater | //opset 9 func.func @Greater(...) { return hbir.greater(...) } | hbir.greater | TBD |
| GridSample | none | GridSample | unsupported | ||
| Hardmax | none | Hardmax | unsupported | ||
| Identity | The operator is deleted/fused/folded | Identity | unsupported | ||
| If | The operator is deleted/fused/folded | None | fully supported | ||
| ImageDecoder | none | ImageDecoder | unsupported | ||
| InstanceNormalization | The operator is split | ReduceMean | //opset 9 func.func @ReduceMean(%x, %axes, %keepdims=1) { hbir.reduce_mean(%x, dims=%axes, keepDim=bool(%keepdims)) } | hbir.reduce_mean | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input |
| Sub | //opset 9 func.func @Sub(...) { return hbir.sub(...) } | hbir.sub | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| Mul | //opset 9 func.func @Mul(...) { return hbir.mul(...) } | hbir.mul | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| Add | //opset 9 func.func @Add(...) { return hbir.add(...) } | hbir.add | lhs: Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 | ||
| IsInf | none | IsInf | unsupported | ||
| IsNaN | none | IsNaN | unsupported | ||
| LRN | none | LRN | unsupported | ||
| LSTM | The operator is split | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Split | //opset 9 func.func @Split(%x, %axis=0, %split) { ret_list = [] for i in range(len(%split)): // %begin and %end are computed from %axis, %split and %x.shape; %step is computed from %x.shape ret_list.append(hbir.slice(%x, begin=%begin, end=%end, step=%step)) return ret_list } | hbir.slice | input: Dim: all dims < 2097152 output: Same as input | ||
| Mul | //opset 9 func.func @Mul(...) { return hbir.mul(...) } | hbir.mul | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| Concat | //opset 9 func.func @Concat(%*args, %axis) { return hbir.concat(%args, dim=%axis) } | hbir.concat | inputs: Arg Number: inputs number ∈ [1, 1024] Dim: all dims < 131072 size < 2G output: same as inputs | ||
| Transpose | //opset 9 func.func @Transpose(...) { return hbir.transpose(...) } | hbir.transpose | inputs: no limits output: same as inputs | ||
| Add | //opset 9 func.func @Add(...) { return hbir.add(...) } | hbir.add | lhs: Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| Conv | //opset 9 func.func @Conv(%x, %w, %b, %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) { %0 = hbir.transpose(%x, [0, 2, 3, 1]) %1 = hbir.transpose(%w, [0, 2, 3, 1]) %2 = hbir.conv2d(%0, %1, %strides, %pads, %dilations, %group, bias=%b) %3 = hbir.transpose(%2, [0, 3, 1, 2]) return %3 } | hbir.transpose | inputs: no limits output: same as inputs | ||
| hbir.conv2d | input: Type: int8, int16; input and weight cannot both be int16 Shape: [*,H,W,C] Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536] weight: Type: int8, int16; input and weight cannot both be int16 Shape: [N,KH,KW,C] Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv else [1, 8192] Size: KH\*KW\*C ∈ [1, 65536] bias: Type: f32 output: Type: int8, int16, int32 Other constraints: same as fin stride: Shape: [SH,SW] Dim: SH,SW ∈ [1, 256]; SH,SW ∈ {1} if dilation > 1 pad: Shape: [PN,PH,PW,PC] Dim: PN,PH,PW,PC ∈ [-1, 256] groupNum: fin.c is divisible by group number dilation: Shape: [DH,DW] Dim: DH,DW ∈ [1, 18] others: Don't support even stride when conv is a depthwise conv For each group, fin.c ∈ [1, 8192], KH\*KW\*fin.c ∈ [1, 65535], fin.c = C when group = 1 | ||||
| Less | none | Less | //opset 9 func.func @Less(...) { return hbir.less(...) } | hbir.less | TBD |
| Log | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Loop | none | Loop | unsupported | ||
| LpNormalization | none | LpNormalization | unsupported | ||
| LpPool | none | LpPool | unsupported | ||
| MatMul | The operator is replaced with Conv when its input is a constant value | MatMul | //opset 9 func.func @MatMul(...) { return hbir.matmul(...) } | hbir.matmul | lhs: Type: int8, int16; lhs and rhs cannot both be int16 Shape: [*,M,C] Dim: * ∈ [1, 4096], M,C ∈ [1, 8192] rhs: Type: int8, int16; lhs and rhs cannot both be int16 Shape: [*,C,N] Dim: * ∈ [1, 4096]; C,N ∈ [1, 8192] output: Type: int8, int16, int32 Shape: [*,M,N] Other constraints: same as lhs |
| Conv | //opset 9 func.func @Conv(%x, %w, %b, %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) { %0 = hbir.transpose(%x, [0, 2, 3, 1]) %1 = hbir.transpose(%w, [0, 2, 3, 1]) %2 = hbir.conv2d(%0, %1, %strides, %pads, %dilations, %group, bias=%b) %3 = hbir.transpose(%2, [0, 3, 1, 2]) return %3 } | hbir.transpose | inputs: no limits output: same as inputs | ||
| hbir.conv2d | input: Type: int8, int16; input and weight cannot both be int16 Shape: [*,H,W,C] Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536] weight: Type: int8, int16; input and weight cannot both be int16 Shape: [N,KH,KW,C] Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv else [1, 8192] Size: KH\*KW\*C ∈ [1, 65536] bias: Type: f32 output: Type: int8, int16, int32 Other constraints: same as fin stride: Shape: [SH,SW] Dim: SH,SW ∈ [1, 256]; SH,SW ∈ {1} if dilation > 1 pad: Shape: [PN,PH,PW,PC] Dim: PN,PH,PW,PC ∈ [-1, 256] groupNum: fin.c is divisible by group number dilation: Shape: [DH,DW] Dim: DH,DW ∈ [1, 18] others: Don't support even stride when conv is a depthwise conv For each group, fin.c ∈ [1, 8192], KH\*KW\*fin.c ∈ [1, 65535], fin.c = C when group = 1 | ||||
| MatMulInteger | none | MatMulInteger | unsupported | ||
| Max | The operator is split | Max | //opset 9 func.func @Max(...) { return hbir.max(...) } | hbir.max | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs |
| MaxPool | none | MaxPool | unsupported | ||
| MaxRoiPool | none | MaxRoiPool | unsupported | ||
| MaxUnpool | none | MaxUnpool | unsupported | ||
| Mean | none | Mean | unsupported | ||
| MelWeightMatrix | none | MelWeightMatrix | unsupported | ||
| Min | The operator is split | Min | //opset 9 func.func @Min(...) { return hbir.min(...) } | hbir.min | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs |
| Mod | none | Mod | unsupported | ||
| Mul | none | Mul | //opset 9 func.func @Mul(...) { return hbir.mul(...) } | hbir.mul | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs |
| Multinomial | none | Multinomial | unsupported | ||
| Neg | none | Neg | unsupported | ||
| NonMaxSuppression | none | NonMaxSuppression | unsupported | ||
| NonZero | none | NonZero | unsupported | ||
| Not | none | Not | unsupported | ||
| OneHot | none | OneHot | None | None | None |
| Optional | none | Optional | unsupported | ||
| OptionalGetElement | none | OptionalGetElement | unsupported | ||
| OptionalHasElement | none | OptionalHasElement | unsupported | ||
| Or | none | Or | unsupported | ||
| Pad | The operator is deleted/fused/folded | None | fully supported | ||
| Pow | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| QLinearConv | none | QLinearConv | unsupported | ||
| QLinearMatMul | none | QLinearMatMul | unsupported | ||
| QuantizeLinear | The operator is deleted/fused/folded | QuantizeLinear | unsupported | ||
| RNN | none | RNN | unsupported | ||
| RandomNormal | The operator is deleted/fused/folded | RandomNormal | unsupported | ||
| RandomNormalLike | The operator is deleted/fused/folded | RandomNormalLike | unsupported | ||
| RandomUniform | The operator is deleted/fused/folded | RandomUniform | None | None | None |
| RandomUniformLike | The operator is deleted/fused/folded | RandomUniformLike | None | None | None |
| Reciprocal | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| ReduceMax | none | ReduceMax | //opset 9 func.func @ReduceMax(%x, %axes, %keepdims=1) { hbir.reduce_max(%x, dims=%axes, keepDim=bool(%keepdims)) } | hbir.reduce_max | input: Type: int8, int16 Shape: [*] output: Same as input |
| ReduceMean | none | ReduceMean | //opset 9 func.func @ReduceMean(%x, %axes, %keepdims=1) { hbir.reduce_mean(%x, dims=%axes, keepDim=bool(%keepdims)) } | hbir.reduce_mean | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input |
| ReduceMin | none | ReduceMin | unsupported | ||
| ReduceProd | none | ReduceProd | unsupported | ||
| ReduceSum | none | ReduceSum | //opset 9 func.func @ReduceSum(%x, %axes, %keepdims=1) { hbir.reduce_sum(%x, dims=%axes, keepDim=bool(%keepdims)) } | hbir.reduce_sum | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input |
| RegexFullMatch | none | RegexFullMatch | unsupported | ||
| Reshape | none | Reshape | //opset 9 func.func @Reshape(...) { return hbir.reshape(...) } | hbir.reshape | inputs: no limits output: same as inputs |
| Resize | none | Resize | //opset 10 func.func @Resize(%x, %scales, %mode="nearest") { %0 = hbir.transpose(%x, [0, 2, 3, 1]) // %output_shape is computed from %x, %scales and %mode %1 = hbir.resize2d(%0, %step, np.array([-0.5, -0.5]), %mode, size=%output_shape[2:], expansionMode="border") %2 = hbir.transpose(%1, [0, 3, 1, 2]) return %2 } //opset 11 func.func @Resize(%x, %roi, %scales, %sizes, %coordinate_transformation_mode="half_pixel", %cubic_coeff_a=-0.75, %exclude_outside=0, %extrapolation_value=0, %mode="nearest", %nearest_mode="round_prefer_floor") { %0 = hbir.transpose(%x, [0, 2, 3, 1]) // %initial_offset, %mode and %output_shape are computed from the input parameters %1 = hbir.resize2d(%0, %step, %initial_offset, %mode, size=%output_shape[2:], expansionMode="border") %2 = hbir.transpose(%1, [0, 3, 1, 2]) return %2 } | hbir.transpose | inputs: no limits output: same as inputs |
| hbir.resize2d | input: Type: int8 Shape: [*,H,W,C] output: Same as input | ||||
| ReverseSequence | none | ReverseSequence | unsupported | ||
| RoiAlign | none | RoiAlign | unsupported | ||
| Round | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| STFT | none | STFT | unsupported | ||
| Scan | none | Scan | unsupported | ||
| Scatter (deprecated) | none | Scatter (deprecated) | unsupported | ||
| ScatterElements | none | ScatterElements | unsupported | ||
| ScatterND | none | ScatterND | //opset 9 func.func @ScatterND(...) { hbir.scatter_nd(...) } | hbir.scatter_nd | TBD |
| SequenceAt | none | SequenceAt | unsupported | ||
| SequenceConstruct | none | SequenceConstruct | unsupported | ||
| SequenceEmpty | none | SequenceEmpty | unsupported | ||
| SequenceErase | none | SequenceErase | unsupported | ||
| SequenceInsert | none | SequenceInsert | unsupported | ||
| SequenceLength | none | SequenceLength | unsupported | ||
| Shape | The operator is deleted/fused/folded | None | unsupported | ||
| Sigmoid | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Sign | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Sin | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Sinh | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Size | The operator is deleted/fused/folded | None | fully supported | ||
| Slice | none | Slice | //opset 9 func.func @Slice(%x, %starts, %ends, %axes=None) { // %new_starts and %new_ends are computed from %starts, %ends and %axes return hbir.slice(%x, begin=%new_starts, end=%new_ends, step=1) } //opset 10 func.func @Slice(%x, %starts, %ends, %*args) { // %new_starts, %new_ends and %steps are computed from %starts, %ends and %*args return hbir.slice(%x, begin=%new_starts, end=%new_ends, step=%steps) } | hbir.slice | input: Dim: all dims < 2097152 output: Same as input |
| SpaceToDepth | none | SpaceToDepth | //opset 9 func.func @SpaceToDepth(%x, %blocksize) { // %n, %c, %h, %w are derived from %x.shape %0 = hbir.reshape(%x, (%n, %c, %h // %blocksize, %blocksize, %w // %blocksize, %blocksize)) %1 = hbir.transpose(%0, [0, 3, 5, 1, 2, 4]) %2 = hbir.reshape(%1, (%n, %c * (%blocksize * %blocksize), %h // %blocksize, %w // %blocksize)) return %2 } | hbir.reshape | inputs: no limits output: same as inputs |
| hbir.transpose | inputs: no limits output: same as inputs | ||||
| Split | none | Split | //opset 9 func.func @Split(%x, %axis=0, %split) { ret_list = [] for i in range(len(%split)): // %begin and %end are computed from %axis, %split and %x.shape; %step is computed from %x.shape ret_list.append(hbir.slice(%x, begin=%begin, end=%end, step=%step)) return ret_list } | hbir.slice | input: Dim: all dims < 2097152 output: Same as input |
| SplitToSequence | none | SplitToSequence | unsupported | ||
| Sqrt | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Squeeze | The operator is replaced | Reshape | //opset 9 func.func @Reshape(...) { return hbir.reshape(...) } | hbir.reshape | inputs: no limits output: same as inputs |
| StringConcat | none | StringConcat | unsupported | ||
| StringNormalizer | none | StringNormalizer | unsupported | ||
| StringSplit | none | StringSplit | unsupported | ||
| Sub | none | Sub | //opset 9 func.func @Sub(...) { return hbir.sub(...) } | hbir.sub | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs |
| Sum | The operator is replaced | Add | //opset 9 func.func @Add(...) { return hbir.add(...) } | hbir.add | lhs: Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op Shape: [*] rhs: Same as lhs output: Same as lhs |
| Tan | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Tanh | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| TfIdfVectorizer | none | TfIdfVectorizer | unsupported | ||
| Tile | none | Tile | //opset 9 func.func @Tile(...) { return hbir.tile(...) } | hbir.tile | inputs: no limits output: same as inputs |
| TopK | none | TopK | //opset 9,10 func.func @TopK(%data, %k, %axis) { return hbir.topk(%data, k=%k, dim=%axis, largest=bool(1), sorted=bool(1)) } //opset 11 func.func @TopK(%data, %k, %axis, %largest=1, %sorted=1) { return hbir.topk(%data, k=%k, dim=%axis, largest=bool(largest), sorted=bool(sorted)) } | hbir.topk | Unsupported |
| Transpose | none | Transpose | //opset 9 func.func @Transpose(...) { return hbir.transpose(...) } | hbir.transpose | inputs: no limits output: same as inputs |
| Trilu | none | Trilu | unsupported | ||
| Unique | none | Unique | unsupported | ||
| Unsqueeze | The operator is replaced | Reshape | //opset 9 func.func @Reshape(...) { return hbir.reshape(...) } | hbir.reshape | inputs: no limits output: same as inputs |
| Upsample (deprecated) | The operator is replaced | Reshape | //opset 9 func.func @Reshape(...) { return hbir.reshape(...) } | hbir.reshape | inputs: no limits output: same as inputs |
| Where | The operator is split | Cast | //opset 9 func.func @Cast(...) { return hbir.cast(...) } | hbir.cast | TBD |
| Equal | //opset 9 func.func @Equal(...) { return hbir.equal(...) } | hbir.equal | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Type: bool8 | ||
| Add | //opset 9 func.func @Add(...) { return hbir.add(...) } | hbir.add | lhs: Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| Mul | //opset 9 func.func @Mul(...) { return hbir.mul(...) } | hbir.mul | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| Xor | none | Xor | unsupported | ||
| AffineGrid | none | AffineGrid | unsupported | ||
| Bernoulli | none | Bernoulli | unsupported | ||
| BlackmanWindow | none | BlackmanWindow | unsupported | ||
| CastLike | none | CastLike | unsupported | ||
| Celu | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| CenterCropPad | none | none | unsupported | ||
| Clip | The operator is converted into a look-up-table operator or fused | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| DynamicQuantizeLinear | none | DynamicQuantizeLinear | unsupported | ||
| Elu | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Gelu | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| GreaterOrEqual | none | GreaterOrEqual | //opset 12 func.func @GreaterOrEqual(%lhs, %rhs) { %0 = hbir.greater(%lhs, %rhs) %1 = hbir.equal(%lhs, %rhs) %2 = hbir.logical_or(%0, %1) return %2 } | hbir.greater | TBD |
| hbir.equal | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Type: bool8 | ||||
| hbir.logical_or | TBD | ||||
| GroupNormalization | torch.nn.GroupNorm is split automatically when torch exports the model to ONNX | Reshape | //opset 9 func.func @Reshape(...) { return hbir.reshape(...) } | hbir.reshape | inputs: no limits output: same as inputs |
| ReduceMean | //opset 9 func.func @ReduceMean(%x, %axes, %keepdims=1) { hbir.reduce_mean(%x, dims=%axes, keepDim=bool(%keepdims)) } | hbir.reduce_mean | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input | ||
| Sub | //opset 9 func.func @Sub(...) { return hbir.sub(...) } | hbir.sub | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| Mul | //opset 9 func.func @Mul(...) { return hbir.mul(...) } | hbir.mul | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| Add | //opset 9 func.func @Add(...) { return hbir.add(...) } | hbir.add | lhs: Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 | ||
| HammingWindow | none | HammingWindow | unsupported | ||
| HannWindow | none | HannWindow | unsupported | ||
| HardSigmoid | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| HardSwish | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| LayerNormalization | The operator is split when it is a fixed-point calculation operator | ReduceMean | //opset 9 func.func @ReduceMean(%x, %axes, %keepdims=1) { hbir.reduce_mean(%x, dims=%axes, keepDim=bool(keepdims)) | hbir.reduce_mean | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input |
| GlobalAveragePool | //opset 9 func.func @GlobalAveragePool(%x) { hbir.reduce_mean(%x, dims=[-2, -1], keepDim=True) } | hbir.reduce_mean | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input | ||
| Sub | //opset 9 func.func @Sub(...) { return hbir.sub(...) } | hbir.sub | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| LeakyRelu | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| LessOrEqual | none | LessOrEqual | //opset 12 func.func @LessOrEqual(%lhs, %rhs) { %0 = hbir.less(%lhs, %rhs) %1 = hbir.equal(%lhs, %rhs) %2 = hbir.logical_or(%0, %1) return %2 } | hbir.less | TBD |
| hbir.equal | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Type: bool8 | ||||
| hbir.logical_or | TBD | ||||
| LogSoftmax | none | LogSoftmax | //opset 9 func.func @LogSoftmax(%data, %axis) { return hbir.log_softmax(%data, dim=%axis) } | hbir.log_softmax | TBD |
| MeanVarianceNormalization | none | MeanVarianceNormalization | unsupported | ||
| Mish | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| NegativeLogLikelihoodLoss | none | NegativeLogLikelihoodLoss | unsupported | ||
| PRelu | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Range | none | Range | None | None | None |
| ReduceL1 | The operator is split | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| ReduceSum | //opset 9 func.func @ReduceSum(%x, %axes, %keepdims=1) { hbir.reduce_sum(%x, dims=%axes, keepDim=bool(keepdims)) | hbir.reduce_sum | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input | ||
| ReduceL2 | The operator is split | Pow | //opset 9 func.func @Pow(...) { return hbir.pow(...) } | hbir.pow | TBD |
| ReduceSum | //opset 9 func.func @ReduceSum(%x, %axes, %keepdims=1) { hbir.reduce_sum(%x, dims=%axes, keepDim=bool(keepdims)) | hbir.reduce_sum | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input | ||
| Sqrt | //opset 9 func.func @Sqrt(...) { return hbir.sqrt(...) } | hbir.sqrt | input: Type: int8, int16 Shape: [*] output: Same as input | ||
| ReduceLogSum | none | ReduceLogSum | unsupported | ||
| ReduceLogSumExp | none | ReduceLogSumExp | unsupported | ||
| ReduceSumSquare | none | ReduceSumSquare | unsupported | ||
| Relu | The operator is deleted/fused/folded | None | fully supported | ||
| Selu | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| SequenceMap | none | SequenceMap | unsupported | ||
| Shrink | none | Shrink | unsupported | ||
| Softmax | The operator is split when it is a fixed-point calculation operator | Sub | //opset 9 func.func @Sub(...) { return hbir.sub(...) } | hbir.sub | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs |
| HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 | ||
| ReduceSum | //opset 9 func.func @ReduceSum(%x, %axes, %keepdims=1) { hbir.reduce_sum(%x, dims=%axes, keepDim=bool(keepdims)) | hbir.reduce_sum | input: Type: int8, int16 Shape: [*] Dim: reduce axis dim size ∈ [1, 16384] output: Same as input | ||
| ReduceMax | //opset 9 func.func @ReduceMax(%x, %axes, %keepdims=1) { hbir.reduce_max(%x, dims=%axes, keepDim=bool(keepdims)) | hbir.reduce_max | input: Type: int8, int16 Shape: [*] output: Same as input | ||
| Reciprocal | //opset 9 func.func @Reciprocal(...) { return hbir.reciprocal(...) } | hbir.reciprocal | input: Type: int8, int16 Shape: [*] output: Same as input | ||
| Mul | //opset 9 func.func @Mul(...) { return hbir.mul(...) } | hbir.mul | lhs: Type: int8, int16 Shape: [*] rhs: Same as lhs output: Same as lhs | ||
| SoftmaxCrossEntropyLoss | none | SoftmaxCrossEntropyLoss | unsupported | ||
| Softplus | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| Softsign | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
| ThresholdedRelu | The operator is converted into a look-up-table operator | HzLut | //opset 9 func.func @HzLut(...) { return b30.lut(...) } | b30.lut | inputs: Type: int8, int16 outputs: If input is int8, output is int8; if input is int16, output is int8/int16 |
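The Softmax rows above describe a decomposition into ReduceMax, Sub, a look-up table for the exponential, ReduceSum, Reciprocal, and Mul. The sketch below is a float-precision NumPy illustration of that graph shape, not the toolchain's actual code: on the BPU the exponential is realized as a `b30.lut` table and all tensors are quantized, and the function name here is our own.

```python
import numpy as np

def softmax_decomposed(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Float reference for the fixed-point Softmax split shown in the table."""
    m = np.max(x, axis=axis, keepdims=True)     # ReduceMax -> hbir.reduce_max
    shifted = x - m                             # Sub       -> hbir.sub
    e = np.exp(shifted)                         # HzLut exp -> b30.lut (table in hardware)
    s = np.sum(e, axis=axis, keepdims=True)     # ReduceSum -> hbir.reduce_sum
    r = 1.0 / s                                 # Reciprocal-> hbir.reciprocal
    return e * r                                # Mul       -> hbir.mul
```

Subtracting the maximum first keeps the look-up-table input in a bounded range, which matters when the exponential is approximated by an int8/int16 table rather than computed exactly.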