Probabilities as a vector or as a ZipMap

A classifier usually returns a matrix of probabilities. By default, sklearn-onnx converts that matrix into a list of dictionaries where each probability is mapped to its class id or name. That mechanism preserves the class names, but the conversion increases the prediction time and is not always needed. Let's see how to deactivate this behaviour on the Iris example.

Train a model and convert it

from timeit import repeat
import numpy
import sklearn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import onnxruntime as rt
import onnx
import skl2onnx
from skl2onnx.common.data_types import FloatTensorType
from skl2onnx import convert_sklearn
from sklearn.linear_model import LogisticRegression

iris = load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
clr = LogisticRegression(max_iter=500)
clr.fit(X_train, y_train)
print(clr)

initial_type = [("float_input", FloatTensorType([None, 4]))]
onx = convert_sklearn(clr, initial_types=initial_type, target_opset=12)
LogisticRegression(max_iter=500)
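
Before looking at the outputs, the converted graph itself can be inspected. The short sketch below is not part of the original example; it simply lists the operator types in the graph, where a ZipMap node should appear after the classifier to build the dictionaries.

# Optional check (not in the original example): list the operators
# of the converted graph; a ZipMap node should be among them.
print([node.op_type for node in onx.graph.node])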

Output type

Let's confirm with onnxruntime that the output type of the probabilities is a list of dictionaries.

sess = rt.InferenceSession(onx.SerializeToString(), providers=["CPUExecutionProvider"])
res = sess.run(None, {"float_input": X_test.astype(numpy.float32)})
print(res[1][:2])
print("probabilities type:", type(res[1]))
print("type for the first observations:", type(res[1][0]))
[{0: 0.001847358187660575, 1: 0.694525957107544, 2: 0.3036267161369324}, {0: 5.262259037408512e-06, 1: 0.027987273409962654, 2: 0.9720074534416199}]
probabilities type: <class 'list'>
type for the first observations: <class 'dict'>
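
If downstream code expects a dense matrix, the list of dictionaries can be turned back into an array. The following is only a sketch of that round trip (the variable names are just for illustration); the copy it performs is part of the overhead the ZipMap output adds.

# Rebuild a dense (n_samples, n_classes) matrix from the ZipMap output.
proba_dicts = res[1]
class_ids = sorted(proba_dicts[0])  # dictionary keys are the class ids
proba_matrix = numpy.array([[d[c] for c in class_ids] for d in proba_dicts])
print(proba_matrix[:2])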

Without ZipMap

Let's remove the ZipMap operator.

initial_type = [("float_input", FloatTensorType([None, 4]))]
options = {id(clr): {"zipmap": False}}
onx2 = convert_sklearn(
    clr, initial_types=initial_type, options=options, target_opset=12
)

sess2 = rt.InferenceSession(
    onx2.SerializeToString(), providers=["CPUExecutionProvider"]
)
res2 = sess2.run(None, {"float_input": X_test.astype(numpy.float32)})
print(res2[1][:2])
print("probabilities type:", type(res2[1]))
print("type for the first observations:", type(res2[1][0]))
[[1.8473582e-03 6.9452596e-01 3.0362672e-01]
 [5.2622590e-06 2.7987273e-02 9.7200745e-01]]
probabilities type: <class 'numpy.ndarray'>
type for the first observations: <class 'numpy.ndarray'>
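
As an additional sanity check, not in the original example, the raw probabilities can be compared with scikit-learn's own predict_proba; only small float32 rounding differences are expected.

# Compare the ONNX probabilities with scikit-learn's predictions.
expected = clr.predict_proba(X_test)
print("max difference:", numpy.abs(res2[1] - expected).max())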

One output per class

This option removes the final ZipMap operator and splits the probabilities into columns. The final model produces one output for the label and one output per class.

options = {id(clr): {"zipmap": "columns"}}
onx3 = convert_sklearn(
    clr, initial_types=initial_type, options=options, target_opset=12
)

sess3 = rt.InferenceSession(
    onx3.SerializeToString(), providers=["CPUExecutionProvider"]
)
res3 = sess3.run(None, {"float_input": X_test.astype(numpy.float32)})
for i, out in enumerate(sess3.get_outputs()):
    print(
        "output: '{}' shape={} values={}...".format(
            out.name, res3[i].shape, res3[i][:2]
        )
    )
output: 'output_label' shape=(38,) values=[1 2]...
output: 'i0' shape=(38,) values=[1.8473582e-03 5.2622590e-06]...
output: 'i1' shape=(38,) values=[0.69452596 0.02798727]...
output: 'i2' shape=(38,) values=[0.30362672 0.97200745]...
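
When a single matrix is still needed, the per-class columns can be stacked back together. This is a minimal sketch, not part of the original script, and the variable name is only for illustration.

# Output 0 is the label, the remaining outputs hold one column per class.
proba_columns = numpy.column_stack(res3[1:])
print(proba_columns[:2])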

Let's compare the prediction times

X32 = X_test.astype(numpy.float32)

print("Time with ZipMap:")
print(repeat(lambda: sess.run(None, {"float_input": X32}), number=100, repeat=10))

print("Time without ZipMap:")
print(repeat(lambda: sess2.run(None, {"float_input": X32}), number=100, repeat=10))

print("Time without ZipMap but with columns:")
print(repeat(lambda: sess3.run(None, {"float_input": X32}), number=100, repeat=10))

# The prediction is much faster without ZipMap
# on this example.
# The gain is even larger when the classes are
# described with strings rather than integers,
# because the final result (a list of dictionaries)
# may copy the same information many times with onnxruntime.
Time with ZipMap:
[0.006723120997776277, 0.0033048049990611617, 0.002702183999645058, 0.00259514600111288, 0.002283028999954695, 0.002864047000912251, 0.001961175999895204, 0.0019503219991747756, 0.0020229100009601098, 0.0034202739989268593]
Time without ZipMap:
[0.00287627999932738, 0.002934403000836028, 0.002290534997882787, 0.0010132009992958046, 0.0009289869994972833, 0.0010291920007148292, 0.0008970989983936306, 0.0009155870029644575, 0.000863607998326188, 0.0012434320015017875]
Time without ZipMap but with columns:
[0.0062263910003821366, 0.005719844000850571, 0.005481091000547167, 0.005109049001475796, 0.002306667000084417, 0.0021617760030494537, 0.0019600490013544913, 0.001912022999022156, 0.0018453459997544996, 0.0017368619992339518]
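
The comment above states that the gain is even larger when the classes are described with strings. The original script does not measure this; the sketch below shows how such a check could be done by retraining the model on the textual Iris labels (all variable names here are only for this sketch).

# Sketch (not in the original example): train on string labels and time the
# ZipMap and no-ZipMap variants again. With string keys, building the list
# of dictionaries is expected to be even more expensive.
clr_str = LogisticRegression(max_iter=500).fit(X_train, iris.target_names[y_train])
onx_zip = convert_sklearn(clr_str, initial_types=initial_type, target_opset=12)
onx_raw = convert_sklearn(
    clr_str,
    initial_types=initial_type,
    options={id(clr_str): {"zipmap": False}},
    target_opset=12,
)
for name, model in [("strings + ZipMap", onx_zip), ("strings, no ZipMap", onx_raw)]:
    sess_str = rt.InferenceSession(
        model.SerializeToString(), providers=["CPUExecutionProvider"]
    )
    best = min(repeat(lambda: sess_str.run(None, {"float_input": X32}), number=100, repeat=10))
    print(name, "best of 10 repeats:", best)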

Versions used for this example

print("numpy:", numpy.__version__)
print("scikit-learn:", sklearn.__version__)
print("onnx: ", onnx.__version__)
print("onnxruntime: ", rt.__version__)
print("skl2onnx: ", skl2onnx.__version__)
numpy: 2.2.0
scikit-learn: 1.6.0
onnx:  1.18.0
onnxruntime:  1.21.0+cu126
skl2onnx:  1.18.0

Total running time of the script: (0 minutes 0.199 seconds)