ONNX Operator Support List

ONNX Operator Name | HMCT convert description | PTQ ONNX Operator | Map Description & Graph Fusion Description | HBIR Operator Name | BPU Support Constraints
AbsThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
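All rows that map to HzLut share the same mechanism: the float function is pre-evaluated into a per-element lookup table over the quantized input range. A minimal NumPy sketch of the idea for the int8 case (the scale values and helper names are illustrative assumptions, not HMCT APIs):

```python
import numpy as np

def build_lut(fn, in_scale, out_scale):
    """Build a 256-entry int8 lookup table for a unary float function."""
    x_int8 = np.arange(-128, 128, dtype=np.int64)
    y = fn(x_int8 * in_scale)                        # dequantize, apply fn
    q = np.clip(np.round(y / out_scale), -128, 127)  # requantize
    return q.astype(np.int8)

def apply_lut(table, x):
    """Apply the table to an int8 tensor: index by (x + 128)."""
    return table[x.astype(np.int64) + 128]

# Example: Abs with identical input/output scale 0.1 (illustrative values)
table = build_lut(np.abs, 0.1, 0.1)
y = apply_lut(table, np.array([-100, -1, 0, 5], dtype=np.int8))
```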
AcosThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
AcoshThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
AddnoneAdd//opset 9
func.func @Add(...) {
return hbir.add(...)
}
hbir.addlhs:
Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
AndnoneAnd//opset 9
func.func @And(...) {
return hbir.logical_and(...)
}
hbir.logical_andlhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Type: bool8
ArgMaxnoneArgMax//opset 9
func.func @ArgMax(%x, %axis=0, %keepdims=1) {
hbir.reduce_argmax(%x, dims=[%axis], keepDim=bool(%keepdims))
}
hbir.reduce_argmaxinput:
Type: int8, int16
Shape: [*]
output:
Same as input, but output can be of type int32 or int64, as long as the size of the reduced axis can be represented using an int16 number
ArgMinnoneArgMin//opset 9
func.func @ArgMin(%x, %axis=0, %keepdims=1) {
hbir.reduce_argmin(%x, dims=[%axis], keepDim=bool(%keepdims))
}
hbir.reduce_argminTBD
AsinThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
AsinhThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
AtanThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
AtanhThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
AveragePoolnoneAveragePool//opset 9
func.func @AveragePool(%x, %kernel_shape,%pads, %strides, %auto_pad="NOTSET", %ceil_mode=0) {
%0 = hbir.transpose(%x, [0, 2, 3, 1])
%1 = hbir.avg_pool2d(%0, kernel_shape, strides, pads, bool(ceil_mode))
%2 = hbir.transpose(%1, [0, 3, 1, 2])
return %2
}
hbir.avg_pool2dinput:
Type: int8, int16
Shape: [*,H,W,C]
output:
Same as input
kernel:
Shape: [KH,KW]
Dim: KH, KW ∈ [1, 256]
stride:
Shape: [SH,SW]
Dim: SH, SW ∈ [1, 256]
pad:
Shape: [PN,PH,PW,PC]
PN,PH,PW,PC ∈ [-3, 256]
hbir.transposeinputs:
no limits
output:
same as inputs
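Because the hbir pooling ops work on NHWC layout, the lowering brackets the pool with two transposes. A NumPy sketch of the same pattern (the naive pool below is an illustrative stand-in for hbir.avg_pool2d; padding is not handled):

```python
import numpy as np

def avg_pool2d_nhwc(x, kh, kw, sh, sw):
    """Naive NHWC average pool (no padding), mirroring hbir.avg_pool2d."""
    n, h, w, c = x.shape
    oh, ow = (h - kh) // sh + 1, (w - kw) // sw + 1
    out = np.empty((n, oh, ow, c), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            win = x[:, i * sh:i * sh + kh, j * sw:j * sw + kw, :]
            out[:, i, j, :] = win.mean(axis=(1, 2))
    return out

def average_pool_nchw(x, kh, kw, sh, sw):
    """ONNX AveragePool on NCHW input, lowered through NHWC transposes."""
    x_nhwc = np.transpose(x, (0, 2, 3, 1))     # hbir.transpose [0,2,3,1]
    y_nhwc = avg_pool2d_nhwc(x_nhwc, kh, kw, sh, sw)
    return np.transpose(y_nhwc, (0, 3, 1, 2))  # hbir.transpose [0,3,1,2]

y = average_pool_nchw(np.arange(16, dtype=np.float64).reshape(1, 1, 4, 4),
                      2, 2, 2, 2)
```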
BatchNormalizationThe operator is deleted/fused/foldedNonefully supported
BitShiftnoneBitShiftunsupported
BitwiseAndnoneBitwiseAndunsupported
BitwiseNotnoneBitwiseNotunsupported
BitwiseOrnoneBitwiseOrunsupported
BitwiseXornoneBitwiseXorunsupported
CastnoneCast//opset 9
func.func @Cast(...) {
return hbir.cast(...)
}
hbir.castTBD
CeilThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
Col2ImnoneCol2Imunsupported
CompressnoneCompressunsupported
ConcatnoneConcat//opset 9
func.func @Concat(%*args, %axis) {
return hbir.concat(%args, dim=%axis)
}
hbir.concatinputs:
Arg Number: inputs number ∈ [1, 1024]
Dim: all dims < 131072
size < 2G
output:
same as inputs
ConcatFromSequencenoneConcatFromSequenceunsupported
ConstantThe operator is deleted/fused/foldedNonefully supported
ConstantOfShapeThe operator is deleted/fused/foldedNonefully supported
ConvnoneConv//opset 9
func.func @Conv(%x, %w, %b, %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) {
%0 = hbir.transpose(%x, [0, 2, 3, 1])
%1 = hbir.transpose(%w, [0, 2, 3, 1])
%2 = hbir.conv2d(%0, %1, %strides, %pads, %dilations, %group, bias=%b)
%3 = hbir.transpose(%2, [0, 3, 1, 2])
return %3
}
hbir.transposeinputs:
no limits
output:
same as inputs
hbir.conv2dinput:
Type: int8, int16; input and weight cannot both be int16
Shape: [*,H,W,C]
Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536]
weight:
Type: int8, int16; input and weight cannot both be int16
Shape: [N,KH,KW,C]
Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv else [1, 8192]
Size: KH\*KW\*C ∈ [1, 65536]
bias:
Type: f32
output:
Type: int8, int16, int32
Other constraints: same as fin
stride:
Shape: [SH,SW]
Dim: SH,SW ∈ [1, 256]; SH,SW ∈ {1} if dilation > 1
pad:
Shape: [PN,PH,PW,PC]
Dim: PN,PH,PW,PC ∈ [-1, 256]
groupNum:
fin.c is divisible by group number
dilation:
Shape: [DH,DW]
Dim: DH,DW ∈ [1, 18]
others:
Don't support even stride when conv is a depthwise conv
For each group, fin.c ∈ [1, 8192], KH\*KW\*fin.c ∈ [1, 65535], fin.c = C when group = 1
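The weight constraints above can be expressed as a simple predicate. A sketch for the group = 1 case (the function name and argument order are illustrative, not part of HMCT):

```python
def check_conv2d_weight(n, kh, kw, c, is_last_conv=False):
    """Validate hbir.conv2d weight dims [N,KH,KW,C] against the
    constraints listed above (group = 1 case)."""
    n_max = 65536 if is_last_conv else 8192      # N limit depends on position
    return (1 <= c <= 8192
            and 1 <= kh <= 31 and 1 <= kw <= 31
            and 1 <= n <= n_max
            and 1 <= kh * kw * c <= 65536)       # kernel volume limit
```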
ConvIntegernoneConvIntegerunsupported
ConvTransposenoneConvTranspose//opset 9
func.func @ConvTranspose(%x, %w, %b, %auto_pad="NOTSET", %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) {
%0 = hbir.transpose(%x, [0, 2, 3, 1])
%1 = hbir.transpose(%w, [0, 2, 3, 1])
%2 = hbir.conv2dtranspose(%0, %1, %strides, %pads, %dilations, %group, bias=%b, illegalweight=True)
%3 = hbir.transpose(%2, [0, 3, 1, 2])
return %3
}
hbir.transposeinputs:
no limits
output:
same as inputs
hbir.conv2dtransposeinput:
Type: int8, int16; input and weight cannot both be int16
Shape: [*,H,W,C]
Dim: * ∈ [1, 128]; H,W ∈ [1, 65536]; C ∈ [1, 2048]
weight:
Type: int8, int16; input and weight cannot both be int16
Shape: [N,KH,KW,C]
Dim: N,C ∈ [1, 2048]; KH,KW ∈ [1, 14]
Size: KH\*KW\*C ∈ [1, 65536]
bias:
Type: f32
output:
Same as input, the type additionally supports int32
stride:
Shape: [SH,SW]
Dim: SH,SW ∈ [1, 14]; SH < KH; SW < KW;
pad:
Shape: [PN,PH,PW,PC]
Dim: PN,PH,PW,PC ∈ [0, 256]
dilation:
Shape: [DH,DW]
Dim: DH,DW ∈ {1}
CosThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
CoshnoneHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
CumSumnoneCumSumunsupported
DFTnoneDFTunsupported
DepthToSpacenoneDepthToSpaceunsupported
DequantizeLinearnoneDequantizeLinear//opset 9
func.func @DequantizeLinear(%x, %x_scale, %x_zero_point){
return qnt.dequantize(%x, scales=%x_scale, zeros=%x_zero_point)
}
qnt.dequantizeinputs:
no limits
outputs:
no limits
DetnoneDetunsupported
DivThe operator is splitHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
Mul//opset 9
func.func @Mul(...) {
return hbir.mul(...)
}
hbir.mullhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
DropoutThe operator is deleted/fused/foldedDropoutunsupported
EinsumnoneEinsumunsupported
EqualnoneEqual//opset 9
func.func @Equal(...) {
return hbir.equal(...)
}
hbir.equallhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Type: bool8
ErfThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
ExpnoneHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
ExpandnoneExpand//opset 9
func.func @Expand(%data, %shape) {
// %original_shape and %repeat_list are computed from %data.shape and %shape
%0 = hbir.reshape(%data, %original_shape)
%1 = hbir.tile(%data, %repeat_list)
return %1
}
hbir.reshapeinputs:
no limits
output:
same as inputs
hbir.tileinputs:
no limits
output:
same as inputs
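The Expand lowering above pads the input shape with leading 1s, then tiles. A NumPy sketch of the equivalence (simplified: it assumes each target dim is a multiple of the corresponding input dim):

```python
import numpy as np

def expand(data, shape):
    """ONNX Expand lowered to reshape + tile, as in the row above."""
    # Left-pad data's shape with 1s to the target rank (hbir.reshape).
    original_shape = (1,) * (len(shape) - data.ndim) + data.shape
    reshaped = data.reshape(original_shape)
    # Repeat each axis up to the target size (hbir.tile).
    repeat_list = [t // s for s, t in zip(original_shape, shape)]
    return np.tile(reshaped, repeat_list)

out = expand(np.array([1, 2, 3]), (2, 3))  # broadcast a row to 2x3
```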
EyeLikenoneEyeLikeunsupported
FlattenThe operator is replacedReshape//opset 9
func.func @Reshape(...) {
return hbir.reshape(...)
}
hbir.reshapeinputs:
no limits
output:
same as inputs
FloorThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
GRUnoneGRUunsupported
GathernoneGather//opset 9
func.func @Gather(%x, %indices, %axis=0) {
return hbir.index(%x, index=%indices, dim=%axis)
}
hbir.indexTBD
GatherElementsnoneGatherElements//opset 9
func.func @GatherElements(%data, %indices, %axis=0) {
return hbir.gather(%data, %indices, dim=%axis)
}
hbir.gatherUnsupported
GatherNDnoneGatherND//opset 9
func.func @GatherND(%data, %indices, %dims=0) {
return hbir.gather_nd(%data, %indices, dim=np.array(%dims))
}
hbir.gather_ndTBD
GemmThe operator is replacedConv//opset 9
func.func @Conv(%x, %w, %b, %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) {
%0 = hbir.transpose(%x, [0, 2, 3, 1])
%1 = hbir.transpose(%w, [0, 2, 3, 1])
%2 = hbir.conv2d(%0, %1, %strides, %pads, %dilations, %group, bias=%b)
%3 = hbir.transpose(%2, [0, 3, 1, 2])
return %3
}
hbir.transposeinputs:
no limits
output:
same as inputs
hbir.conv2dinput:
Type: int8, int16; input and weight cannot both be int16
Shape: [*,H,W,C]
Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536]
weight:
Type: int8, int16; input and weight cannot both be int16
Shape: [N,KH,KW,C]
Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv else [1, 8192]
Size: KH\*KW\*C ∈ [1, 65536]
bias:
Type: f32
output:
Type: int8, int16, int32
Other constraints: same as fin
stride:
Shape: [SH,SW]
Dim: SH,SW ∈ [1, 256]; SH,SW ∈ {1} if dilation > 1
pad:
Shape: [PN,PH,PW,PC]
Dim: PN,PH,PW,PC ∈ [-1, 256]
groupNum:
fin.c is divisible by group number
dilation:
Shape: [DH,DW]
Dim: DH,DW ∈ [1, 18]
others:
Don't support even stride when conv is a depthwise conv
For each group, fin.c ∈ [1, 8192], KH\*KW\*fin.c ∈ [1, 65535], fin.c = C when group = 1
GlobalAveragePoolnoneGlobalAveragePool//opset 9
func.func @GlobalAveragePool(%x) {
hbir.reduce_mean(%x, dims=[-2, -1], keepDim=True)
}
hbir.reduce_meaninput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
GlobalLpPoolnoneGlobalLpPoolunsupported
GlobalMaxPoolnoneGlobalMaxPool//opset 9
func.func @GlobalMaxPool(%x) {
hbir.reduce_max(%x, dims=[-2, -1], keepDim=True)
}
hbir.reduce_maxinput:
Type: int8, int16
Shape: [*]
output:
Same as input
GreaternoneGreater//opset 9
func.func @Greater(...) {
return hbir.greater(...)
}
hbir.greaterTBD
GridSamplenoneGridSampleunsupported
HardmaxnoneHardmaxunsupported
IdentityThe operator is deleted/fused/foldedIdentityunsupported
IfThe operator is deleted/fused/foldedNonefully supported
ImageDecodernoneImageDecoderunsupported
InstanceNormalizationThe operator is splitReduceMean//opset 9
func.func @ReduceMean(%x, %axes, %keepdims=1) {
hbir.reduce_mean(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_meaninput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
Sub//opset 9
func.func @Sub(...) {
return hbir.sub(...)
}
hbir.sublhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
Mul//opset 9
func.func @Mul(...) {
return hbir.mul(...)
}
hbir.mullhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
Add//opset 9
func.func @Add(...) {
return hbir.add(...)
}
hbir.addlhs:
Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
HzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
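The InstanceNormalization split above composes ReduceMean, Sub, Mul and Add, with the 1/sqrt term going through a lookup table. A NumPy sketch of the float-domain reference computation (NCHW layout assumed; names are illustrative):

```python
import numpy as np

def instance_norm_decomposed(x, gamma, beta, eps=1e-5):
    """InstanceNormalization via the ops listed above (NCHW layout)."""
    mean = x.mean(axis=(2, 3), keepdims=True)                      # ReduceMean
    centered = x - mean                                            # Sub
    var = (centered * centered).mean(axis=(2, 3), keepdims=True)   # Mul + ReduceMean
    inv_std = 1.0 / np.sqrt(var + eps)                             # HzLut approximates this
    scale = gamma.reshape(1, -1, 1, 1)
    shift = beta.reshape(1, -1, 1, 1)
    return centered * inv_std * scale + shift                      # Mul + Add

rng = np.random.default_rng(0)
y = instance_norm_decomposed(rng.normal(size=(1, 2, 4, 4)),
                             np.ones(2), np.zeros(2))
```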
IsInfnoneIsInfunsupported
IsNaNnoneIsNaNunsupported
LRNnoneLRNunsupported
LSTMThe operator is splitHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
Split//opset 9
func.func @Split(%x, %axis=0, %split) {
ret_list = []
for i in range(len(%split)):
// %begin and %end are computed from %axis, %split and %x.shape; %step is computed from %x.shape
ret_list.append(hbir.slice(%x, begin=%begin, end=%end, step=%step))
return ret_list
}
hbir.sliceinput:
Dim: all dims < 2097152
output:
Same as input
Mul//opset 9
func.func @Mul(...) {
return hbir.mul(...)
}
hbir.mullhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
Concat//opset 9
func.func @Concat(%*args, %axis) {
return hbir.concat(%args, dim=%axis)
}
hbir.concatinputs:
Arg Number: inputs number ∈ [1, 1024]
Dim: all dims < 131072
size < 2G
output:
same as inputs
Transpose//opset 9
func.func @Transpose(...) {
return hbir.transpose(...)
}
hbir.transposeinputs:
no limits
output:
same as inputs
Add//opset 9
func.func @Add(...) {
return hbir.add(...)
}
hbir.addlhs:
Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
Conv//opset 9
func.func @Conv(%x, %w, %b, %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) {
%0 = hbir.transpose(%x, [0, 2, 3, 1])
%1 = hbir.transpose(%w, [0, 2, 3, 1])
%2 = hbir.conv2d(%0, %1, %strides, %pads, %dilations, %group, bias=%b)
%3 = hbir.transpose(%2, [0, 3, 1, 2])
return %3
}
hbir.transposeinputs:
no limits
output:
same as inputs
hbir.conv2dinput:
Type: int8, int16; input and weight cannot both be int16
Shape: [*,H,W,C]
Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536]
weight:
Type: int8, int16; input and weight cannot both be int16
Shape: [N,KH,KW,C]
Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv else [1, 8192]
Size: KH\*KW\*C ∈ [1, 65536]
bias:
Type: f32
output:
Type: int8, int16, int32
Other constraints: same as fin
stride:
Shape: [SH,SW]
Dim: SH,SW ∈ [1, 256]; SH,SW ∈ {1} if dilation > 1
pad:
Shape: [PN,PH,PW,PC]
Dim: PN,PH,PW,PC ∈ [-1, 256]
groupNum:
fin.c is divisible by group number
dilation:
Shape: [DH,DW]
Dim: DH,DW ∈ [1, 18]
others:
Don't support even stride when conv is a depthwise conv
For each group, fin.c ∈ [1, 8192], KH\*KW\*fin.c ∈ [1, 65535], fin.c = C when group = 1
LessnoneLess//opset 9
func.func @Less(...) {
return hbir.less(...)
}
hbir.lessTBD
LogThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
LoopnoneLoopunsupported
LpNormalizationnoneLpNormalizationunsupported
LpPoolnoneLpPoolunsupported
MatMulThe operator is replaced with Conv when its input is a constant valueMatMul//opset 9
func.func @MatMul(...) {
return hbir.matmul(...)
}
hbir.matmullhs:
Type: int8, int16; lhs and rhs cannot both be int16
Shape: [*,M,C]
Dim: * ∈ [1, 4096], M,C ∈ [1, 8192]
rhs:
Type: int8, int16; lhs and rhs cannot both be int16
Shape: [*,C,N]
Dim: * ∈ [1, 4096]; C,N ∈ [1, 8192]
output:
Type: int8, int16, int32
Shape: [*,M,N]
Other constraints: same as lhs
Conv//opset 9
func.func @Conv(%x, %w, %b, %dilations=(1,1), %group=1, %pads=(0,0,0,0), %strides=(1,1)) {
%0 = hbir.transpose(%x, [0, 2, 3, 1])
%1 = hbir.transpose(%w, [0, 2, 3, 1])
%2 = hbir.conv2d(%0, %1, %strides, %pads, %dilations, %group, bias=%b)
%3 = hbir.transpose(%2, [0, 3, 1, 2])
return %3
}
hbir.transposeinputs:
no limits
output:
same as inputs
hbir.conv2dinput:
Type: int8, int16; input and weight cannot both be int16
Shape: [*,H,W,C]
Dim: * ∈ [1, 4096]; H,W,C ∈ [1, 65536]
weight:
Type: int8, int16; input and weight cannot both be int16
Shape: [N,KH,KW,C]
Dim: C ∈ [1, 8192]; KH,KW ∈ [1, 31]; N ∈ [1, 65536] if fout is the last layer of conv else [1, 8192]
Size: KH\*KW\*C ∈ [1, 65536]
bias:
Type: f32
output:
Type: int8, int16, int32
Other constraints: same as fin
stride:
Shape: [SH,SW]
Dim: SH,SW ∈ [1, 256]; SH,SW ∈ {1} if dilation > 1
pad:
Shape: [PN,PH,PW,PC]
Dim: PN,PH,PW,PC ∈ [-1, 256]
groupNum:
fin.c is divisible by group number
dilation:
Shape: [DH,DW]
Dim: DH,DW ∈ [1, 18]
others:
Don't support even stride when conv is a depthwise conv
For each group, fin.c ∈ [1, 8192], KH\*KW\*fin.c ∈ [1, 65535], fin.c = C when group = 1
MatMulIntegernoneMatMulIntegerunsupported
MaxThe operator is splitMax//opset 9
func.func @Max(...) {
return hbir.max(...)
}
hbir.maxlhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
MaxPoolnoneMaxPoolunsupported
MaxRoiPoolnoneMaxRoiPoolunsupported
MaxUnpoolnoneMaxUnpoolunsupported
MeannoneMeanunsupported
MelWeightMatrixnoneMelWeightMatrixunsupported
MinThe operator is splitMin//opset 9
func.func @Min(...) {
return hbir.min(...)
}
hbir.minlhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
ModnoneModunsupported
MulnoneMul//opset 9
func.func @Mul(...) {
return hbir.mul(...)
}
hbir.mullhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
MultinomialnoneMultinomialunsupported
NegnoneNegunsupported
NonMaxSuppressionnoneNonMaxSuppressionunsupported
NonZerononeNonZerounsupported
NotnoneNotunsupported
OneHotnoneOneHotNoneNoneNone
OptionalnoneOptionalunsupported
OptionalGetElementnoneOptionalGetElementunsupported
OptionalHasElementnoneOptionalHasElementunsupported
OrnoneOrunsupported
PadThe operator is deleted/fused/foldedNonefully supported
PowThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
QLinearConvnoneQLinearConvunsupported
QLinearMatMulnoneQLinearMatMulunsupported
QuantizeLinearThe operator is deleted/fused/foldedQuantizeLinearunsupported
RNNnoneRNNunsupported
RandomNormalThe operator is deleted/fused/foldedRandomNormalunsupported
RandomNormalLikeThe operator is deleted/fused/foldedRandomNormalLikeunsupported
RandomUniformThe operator is deleted/fused/foldedRandomUniformNoneNoneNone
RandomUniformLikeThe operator is deleted/fused/foldedRandomUniformLikeNoneNoneNone
ReciprocalThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
ReduceMaxnoneReduceMax//opset 9
func.func @ReduceMax(%x, %axes, %keepdims=1) {
hbir.reduce_max(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_maxinput:
Type: int8, int16
Shape: [*]
output:
Same as input
ReduceMeannoneReduceMean//opset 9
func.func @ReduceMean(%x, %axes, %keepdims=1) {
hbir.reduce_mean(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_meaninput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
ReduceMinnoneReduceMinunsupported
ReduceProdnoneReduceProdunsupported
ReduceSumnoneReduceSum//opset 9
func.func @ReduceSum(%x, %axes, %keepdims=1) {
hbir.reduce_sum(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_suminput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
RegexFullMatchnoneRegexFullMatchunsupported
ReshapenoneReshape//opset 9
func.func @Reshape(...) {
return hbir.reshape(...)
}
hbir.reshapeinputs:
no limits
output:
same as inputs
ResizenoneResize//opset 10
func.func @Resize(%x, %scales, %mode="nearest") {
%0 = hbir.transpose(%x, [0, 2, 3, 1])
// %output_shape is computed from %x, %scales and %mode
%1 = hbir.resize2d(%0, %step, np.array([-0.5, -0.5]), %mode, size=%output_shape[2:], expansionMode="border")
%2 = hbir.transpose(%1, [0, 3, 1, 2])
return %2
}
//opset 11
func.func @Resize(%x, roi, scales, sizes, coordinate_transformation_mode="half_pixel", cubic_coeff_a="-0.75", exclude_outside=0,
extrapolation_value=0, mode="nearest", nearest_mode="round_prefer_floor") {
%0 = hbir.transpose(%x, [0, 2, 3, 1])
// %initial_offset, %mode and %output_shape are computed from the input parameters
%1 = hbir.resize2d(%0, %step, %initial_offset, %mode, size=%output_shape[2:], expansionMode="border")
%2 = hbir.transpose(%1, [0, 3, 1, 2])
return %2
}
hbir.transposeinputs:
no limits
output:
same as inputs
hbir.resize2dinput:
Type: int8
Shape: [*,H,W,C]
output:
Same as input
ReverseSequencenoneReverseSequenceunsupported
RoiAlignnoneRoiAlignunsupported
RoundThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
STFTnoneSTFTunsupported
ScannoneScanunsupported
Scatter (deprecated)noneScatter (deprecated)unsupported
ScatterElementsnoneScatterElementsunsupported
ScatterNDnoneScatterND//opset 9
func.func @ScatterND(...) {
hbir.scatter_nd(...)
}
hbir.scatter_ndTBD
SequenceAtnoneSequenceAtunsupported
SequenceConstructnoneSequenceConstructunsupported
SequenceEmptynoneSequenceEmptyunsupported
SequenceErasenoneSequenceEraseunsupported
SequenceInsertnoneSequenceInsertunsupported
SequenceLengthnoneSequenceLengthunsupported
ShapeThe operator is deleted/fused/foldednoneunsupported
SigmoidThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
SignThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
SinThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
SinhThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
SizeThe operator is deleted/fused/foldedNonefully supported
SlicenoneSlice//opset 9
func.func @Slice(%x, %starts, %ends, %axes=None) {
// %new_starts and %new_ends are computed from %starts, %ends and %axes
return hbir.slice(%x, begin=%new_starts, end=%new_ends, step=1)
}
//opset 10
func.func @Slice(%x, %starts, %ends, %*args) {
// %new_starts, %new_ends and %steps are computed from %starts, %ends and %*args
return hbir.slice(%x, begin=%new_starts, end=%new_ends, step=%steps)
}
hbir.sliceinput:
Dim: all dims < 2097152
output:
Same as input
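The opset-9 Slice lowering computes full-rank %new_starts/%new_ends from the partial starts/ends/axes attributes. A sketch of that normalization (the name and clamping details are illustrative; step is fixed to 1 as in the opset-9 mapping):

```python
def normalize_slice(shape, starts, ends, axes=None):
    """Expand ONNX Slice starts/ends/axes to full-rank begin/end lists."""
    rank = len(shape)
    axes = list(range(rank)) if axes is None else list(axes)
    begin, end = [0] * rank, list(shape)
    for ax, s, e in zip(axes, starts, ends):
        dim = shape[ax]
        # Negative indices count from the end; clamp into [0, dim].
        s = s + dim if s < 0 else s
        e = e + dim if e < 0 else e
        begin[ax] = max(0, min(dim, s))
        end[ax] = max(0, min(dim, e))
    return begin, end

begin, end = normalize_slice((4, 6), starts=[1], ends=[-1], axes=[1])
```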
SpaceToDepthnoneSpaceToDepth//opset 9
func.func @SpaceToDepth(%x, %blocksize) {
// %n, %c, %h, %w are derived from %x.shape
%0 = hbir.reshape(%x, (%n, %c, %h // %blocksize, %blocksize, %w // %blocksize, %blocksize))
%1 = hbir.transpose(%0, [0, 3, 5, 1, 2, 4])
%2 = hbir.reshape(%1, (%n, %c * (%blocksize * %blocksize), %h // %blocksize, %w // %blocksize))
return %2
}
hbir.reshapeinputs:
no limits
output:
same as inputs
hbir.transposeinputs:
no limits
output:
same as inputs
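The SpaceToDepth lowering above is the standard reshape/transpose/reshape decomposition. A NumPy sketch that mirrors it step for step:

```python
import numpy as np

def space_to_depth(x, blocksize):
    """ONNX SpaceToDepth via the reshape/transpose/reshape above (NCHW)."""
    n, c, h, w = x.shape
    b = blocksize
    t = x.reshape(n, c, h // b, b, w // b, b)    # hbir.reshape
    t = np.transpose(t, (0, 3, 5, 1, 2, 4))      # hbir.transpose
    return t.reshape(n, c * b * b, h // b, w // b)  # hbir.reshape

y = space_to_depth(np.arange(16).reshape(1, 1, 4, 4), 2)
```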
SplitnoneSplit//opset 9
func.func @Split(%x, %axis=0, %split) {
ret_list = []
for i in range(len(%split)):
// %begin and %end are computed from %axis, %split and %x.shape; %step is computed from %x.shape
ret_list.append(hbir.slice(%x, begin=%begin, end=%end, step=%step))
return ret_list
}
hbir.sliceinput:
Dim: all dims < 2097152
output:
Same as input
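The Split lowering emits one hbir.slice per output, with %begin/%end advanced along the split axis. A sketch of that bookkeeping (pure Python; the helper name is illustrative):

```python
def split_to_slices(shape, axis, split):
    """Compute per-output (begin, end, step) tuples for Split -> hbir.slice."""
    result = []
    offset = 0
    for size in split:
        begin = [0] * len(shape)
        end = list(shape)
        begin[axis] = offset          # slice starts where the last one ended
        end[axis] = offset + size
        step = [1] * len(shape)
        result.append((begin, end, step))
        offset += size
    return result

slices = split_to_slices((2, 6), axis=1, split=[2, 4])
```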
SplitToSequencenoneSplitToSequenceunsupported
SqrtThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
SqueezeThe operator is replacedReshape//opset 9
func.func @Reshape(...) {
return hbir.reshape(...)
}
hbir.reshapeinputs:
no limits
output:
same as inputs
StringConcatnoneStringConcatunsupported
StringNormalizernoneStringNormalizerunsupported
StringSplitnoneStringSplitunsupported
SubnoneSub//opset 9
func.func @Sub(...) {
return hbir.sub(...)
}
hbir.sublhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
SumThe operator is replacedAdd//opset 9
func.func @Add(...) {
return hbir.add(...)
}
hbir.addlhs:
Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
TanThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
TanhThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
TfIdfVectorizernoneTfIdfVectorizerunsupported
TilenoneTile//opset 9
func.func @Tile(...) {
return hbir.tile(...)
}
hbir.tileinputs:
no limits
output:
same as inputs
TopKnoneTopK//opset 9,10
func.func @TopK(%data, %k, %axis) {
return hbir.topk(%data, k=%k, dim=%axis, largest=bool(1), sorted=bool(1))
}
//opset 11
func.func @TopK(%data, %k, %axis, %largest=1, %sorted=1) {
return hbir.topk(%data, k=%k, dim=%axis, largest=bool(largest), sorted=bool(sorted))
}
hbir.topkUnsupported
TransposenoneTranspose//opset 9
func.func @Transpose(...) {
return hbir.transpose(...)
}
hbir.transposeinputs:
no limits
output:
same as inputs
TrilunoneTriluunsupported
UniquenoneUniqueunsupported
UnsqueezeThe operator is replacedReshape//opset 9
func.func @Reshape(...) {
return hbir.reshape(...)
}
hbir.reshapeinputs:
no limits
output:
same as inputs
Upsample (deprecated)The operator is replacedReshape//opset 9
func.func @Reshape(...) {
return hbir.reshape(...)
}
hbir.reshapeinputs:
no limits
output:
same as inputs
WhereThe operator is splitCast//opset 9
func.func @Cast(...) {
return hbir.cast(...)
}
hbir.castTBD
Equal//opset 9
func.func @Equal(...) {
return hbir.equal(...)
}
hbir.equallhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Type: bool8
Add//opset 9
func.func @Add(...) {
return hbir.add(...)
}
hbir.addlhs:
Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
Mul//opset 9
func.func @Mul(...) {
return hbir.mul(...)
}
hbir.mullhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
XornoneXorunsupported
AffineGridnoneAffineGridunsupported
BernoullinoneBernoulliunsupported
BlackmanWindownoneBlackmanWindowunsupported
CastLikenoneCastLikeunsupported
CeluThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
CenterCropPadnonenoneunsupported
ClipThe operator is converted into a look-up-table operator or is fusedHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
DynamicQuantizeLinearnoneDynamicQuantizeLinearunsupported
EluThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
GeluThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
GreaterOrEqualnoneGreaterOrEqual//opset 12
func.func @GreaterOrEqual(%lhs, %rhs) {
%0 = hbir.greater(%lhs, %rhs)
%1 = hbir.equal(%lhs, %rhs)
%2 = hbir.logical_or(%0, %1)
return %2
}
hbir.greaterTBD
hbir.equallhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Type: bool8
hbir.logical_orTBD
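The GreaterOrEqual decomposition above is simply greater OR equal, element-wise. A NumPy sketch:

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int8)
b = np.array([2, 2, 2], dtype=np.int8)

# GreaterOrEqual lowered as in the row above: hbir.greater, hbir.equal,
# then hbir.logical_or over the two boolean results.
ge = np.logical_or(np.greater(a, b), np.equal(a, b))
```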
GroupNormalizationtorch.nn.GroupNorm is split automatically when torch exports the model to ONNXReshape//opset 9
func.func @Reshape(...) {
return hbir.reshape(...)
}
hbir.reshapeinputs:
no limits
output:
same as inputs
ReduceMean//opset 9
func.func @ReduceMean(%x, %axes, %keepdims=1) {
hbir.reduce_mean(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_meaninput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
Sub//opset 9
func.func @Sub(...) {
return hbir.sub(...)
}
hbir.sublhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
Mul//opset 9
func.func @Mul(...) {
return hbir.mul(...)
}
hbir.mullhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
Add//opset 9
func.func @Add(...) {
return hbir.add(...)
}
hbir.addlhs:
Type: int8, int16, int32, if type is int32, this hbir.add op must be fusible to a Conv op
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
HzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
HammingWindownoneHammingWindowunsupported
HannWindownoneHannWindowunsupported
HardSigmoidThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
HardSwishThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
LayerNormalizationThe operator is split when it is a fixed-point calculation operatorReduceMean//opset 9
func.func @ReduceMean(%x, %axes, %keepdims=1) {
hbir.reduce_mean(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_meaninput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
GlobalAveragePool//opset 9
func.func @GlobalAveragePool(%x) {
hbir.reduce_mean(%x, dims=[-2, -1], keepDim=True)
}
hbir.reduce_meaninput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
Sub//opset 9
func.func @Sub(...) {
return hbir.sub(...)
}
hbir.sublhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
LeakyReluThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
LessOrEqualnoneLessOrEqual//opset 12
func.func @LessOrEqual(%lhs, %rhs) {
%0 = hbir.less(%lhs, %rhs)
%1 = hbir.equal(%lhs, %rhs)
%2 = hbir.logical_or(%0, %1)
return %2
}
hbir.lessTBD
hbir.equallhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Type: bool8
hbir.logical_orTBD
LogSoftmaxnoneLogSoftmax//opset 9
func.func @LogSoftmax(%data, %axis) {
return hbir.log_softmax(%data, dim=%axis)
}
hbir.log_softmaxTBD
MeanVarianceNormalizationnoneMeanVarianceNormalizationunsupported
MishThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
NegativeLogLikelihoodLossnoneNegativeLogLikelihoodLossunsupported
PReluThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
RangenoneRangeNoneNoneNone
ReduceL1The operator is splitHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
ReduceSum//opset 9
func.func @ReduceSum(%x, %axes, %keepdims=1) {
hbir.reduce_sum(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_suminput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
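The ReduceL1 split above is abs (realized as a lookup table) followed by ReduceSum. In float NumPy terms:

```python
import numpy as np

x = np.array([[-1.0, 2.0], [3.0, -4.0]])

# ReduceL1 lowered as listed above: Abs (via HzLut) then ReduceSum.
l1 = np.sum(np.abs(x), axis=1, keepdims=True)
```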
ReduceL2The operator is splitPow//opset 9
func.func @Pow(...) {
return hbir.pow(...)
}
hbir.powTBD
ReduceSum//opset 9
func.func @ReduceSum(%x, %axes, %keepdims=1) {
hbir.reduce_sum(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_suminput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
Sqrt//opset 9
func.func @Sqrt(...) {
return hbir.sqrt(...)
}
hbir.sqrtinput:
Type: int8, int16
Shape: [*]
output:
Same as input
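The ReduceL2 split above chains Pow, ReduceSum and Sqrt. In float NumPy terms:

```python
import numpy as np

x = np.array([[3.0, 4.0], [6.0, 8.0]])

# ReduceL2 lowered as listed above: Pow -> ReduceSum -> Sqrt.
l2 = np.sqrt(np.sum(np.power(x, 2), axis=1, keepdims=True))
```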
ReduceLogSumnoneReduceLogSumunsupported
ReduceLogSumExpnoneReduceLogSumExpunsupported
ReduceSumSquarenoneReduceSumSquareunsupported
ReluThe operator is deleted/fused/foldedNonefully supported
SeluThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
SequenceMapnoneSequenceMapunsupported
ShrinknoneShrinkunsupported
SoftmaxThe operator is split when it is a fixed-point calculation operatorSub//opset 9
func.func @Sub(...) {
return hbir.sub(...)
}
hbir.sublhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
HzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
ReduceSum//opset 9
func.func @ReduceSum(%x, %axes, %keepdims=1) {
hbir.reduce_sum(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_suminput:
Type: int8, int16
Shape: [*]
Dim: reduce axis dim size ∈ [1, 16384]
output:
Same as input
ReduceMax//opset 9
func.func @ReduceMax(%x, %axes, %keepdims=1) {
hbir.reduce_max(%x, dims=%axes, keepDim=bool(keepdims))
}
hbir.reduce_maxinput:
Type: int8, int16
Shape: [*]
output:
Same as input
Reciprocal//opset 9
func.func @Reciprocal(...) {
return hbir.reciprocal(...)
}
hbir.reciprocalinput:
Type: int8, int16
Shape: [*]
output:
Same as input
Mul//opset 9
func.func @Mul(...) {
return hbir.mul(...)
}
hbir.mullhs:
Type: int8, int16
Shape: [*]
rhs:
Same as lhs
output:
Same as lhs
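The Softmax split above is the usual max-shifted softmax: ReduceMax, Sub, exp via HzLut, ReduceSum, Reciprocal, Mul. A float NumPy sketch of the same dataflow:

```python
import numpy as np

def softmax_decomposed(x, axis=-1):
    """Softmax via the ops listed above."""
    m = np.max(x, axis=axis, keepdims=True)   # ReduceMax
    shifted = x - m                           # Sub (numerical stability)
    e = np.exp(shifted)                       # HzLut approximates exp on HW
    s = np.sum(e, axis=axis, keepdims=True)   # ReduceSum
    return e * (1.0 / s)                      # Reciprocal + Mul

y = softmax_decomposed(np.array([1.0, 2.0, 3.0]))
```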
SoftmaxCrossEntropyLossnoneSoftmaxCrossEntropyLossunsupported
SoftplusThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
SoftsignThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16
ThresholdedReluThe operator is converted into a look-up-table operatorHzLut//opset 9
func.func @HzLut(...) {
return b30.lut(...)
}
b30.lutinputs:
Type: int8, int16
outputs:
If input is int8, output is int8; if input is int16, output is int8/int16