Benchmark ONNX conversion¶
The example Train and deploy a scikit-learn pipeline converts a simple model. This example takes a similar one, except it uses random data and compares the time each available option takes to compute the predictions.
Training a pipeline¶
import numpy
from pandas import DataFrame
from tqdm import tqdm
from onnx.reference import ReferenceEvaluator
from sklearn import config_context
from sklearn.datasets import make_regression
from sklearn.ensemble import (
    GradientBoostingRegressor,
    RandomForestRegressor,
    VotingRegressor,
)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from onnxruntime import InferenceSession
from skl2onnx import to_onnx
from skl2onnx.tutorial import measure_time
N = 11000
X, y = make_regression(N, n_features=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.01)
print("Train shape", X_train.shape)
print("Test shape", X_test.shape)
reg1 = GradientBoostingRegressor(random_state=1)
reg2 = RandomForestRegressor(random_state=1)
reg3 = LinearRegression()
ereg = VotingRegressor([("gb", reg1), ("rf", reg2), ("lr", reg3)])
ereg.fit(X_train, y_train)
Train shape (110, 10)
Test shape (10890, 10)
Measure the processing time¶
We use the function skl2onnx.tutorial.measure_time(). The page about assume_finite may be useful if you need to optimize the prediction. We measure the processing time per observation, whether the observation belongs to a batch or is predicted on its own.
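To make explicit what is being reported, here is a minimal standard-library sketch of per-observation timing. It is an assumption about what measure_time does, not its actual implementation: time a statement with timeit, average over runs, then divide by the batch size.

```python
import statistics
import timeit


def per_observation_time(stmt, context, batch_size, number=10, repeat=5):
    """Rough stand-in for skl2onnx.tutorial.measure_time (an assumption,
    not its real implementation): run `stmt` with timeit and report the
    average time per call and per observation."""
    times = timeit.repeat(stmt, globals=context, number=number, repeat=repeat)
    # Each element of `times` is the total time for `number` calls.
    average = statistics.mean(t / number for t in times)
    return {"average": average, "size": batch_size, "mean_obs": average / batch_size}


# Toy workload standing in for ereg.predict(X): sum a list of 100 numbers.
data = list(range(100))
res = per_observation_time("sum(data)", {"data": data}, batch_size=len(data))
```

The key quantity is `mean_obs`: the average call time divided by the number of rows in the batch, which is what the tables below compare across batch sizes.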
sizes = [(1, 50), (10, 50), (100, 10)]

with config_context(assume_finite=True):
    obs = []
    for batch_size, repeat in tqdm(sizes):
        context = {"ereg": ereg, "X": X_test[:batch_size]}
        mt = measure_time(
            "ereg.predict(X)", context, div_by_number=True, number=10, repeat=repeat
        )
        mt["size"] = context["X"].shape[0]
        mt["mean_obs"] = mt["average"] / mt["size"]
        obs.append(mt)

df_skl = DataFrame(obs)
df_skl
0%| | 0/3 [00:00<?, ?it/s]
33%|███▎ | 1/3 [00:07<00:14, 7.06s/it]
67%|██████▋ | 2/3 [00:12<00:06, 6.25s/it]
100%|██████████| 3/3 [00:14<00:00, 4.01s/it]
100%|██████████| 3/3 [00:14<00:00, 4.70s/it]
Graph.
df_skl.set_index("size")[["mean_obs"]].plot(title="scikit-learn", logx=True, logy=True)
ONNX runtime¶
We do the same for the two available ONNX runtimes.
onx = to_onnx(ereg, X_train[:1].astype(numpy.float32), target_opset=14)
sess = InferenceSession(onx.SerializeToString(), providers=["CPUExecutionProvider"])
oinf = ReferenceEvaluator(onx)
obs = []
for batch_size, repeat in tqdm(sizes):
    # scikit-learn
    context = {"ereg": ereg, "X": X_test[:batch_size].astype(numpy.float32)}
    mt = measure_time(
        "ereg.predict(X)", context, div_by_number=True, number=10, repeat=repeat
    )
    mt["size"] = context["X"].shape[0]
    mt["skl"] = mt["average"] / mt["size"]

    # onnxruntime
    context = {"sess": sess, "X": X_test[:batch_size].astype(numpy.float32)}
    mt2 = measure_time(
        "sess.run(None, {'X': X})[0]",
        context,
        div_by_number=True,
        number=10,
        repeat=repeat,
    )
    mt["ort"] = mt2["average"] / mt["size"]

    # ReferenceEvaluator
    context = {"oinf": oinf, "X": X_test[:batch_size].astype(numpy.float32)}
    mt2 = measure_time(
        "oinf.run(None, {'X': X})[0]",
        context,
        div_by_number=True,
        number=10,
        repeat=repeat,
    )
    mt["pyrt"] = mt2["average"] / mt["size"]

    # end
    obs.append(mt)

df = DataFrame(obs)
df
0%| | 0/3 [00:00<?, ?it/s]
33%|███▎ | 1/3 [00:15<00:31, 15.60s/it]
67%|██████▋ | 2/3 [00:40<00:21, 21.10s/it]
100%|██████████| 3/3 [01:03<00:00, 21.84s/it]
100%|██████████| 3/3 [01:03<00:00, 21.09s/it]
Graph.
df.set_index("size")[["skl", "ort", "pyrt"]].plot(
    title="Average prediction time per runtime", logx=True, logy=True
)
The ONNX runtime is much faster than scikit-learn at predicting a single observation: scikit-learn is optimized for training and for batch prediction. That also explains why scikit-learn and the ONNX runtimes seem to converge for big batches, as they rely on similar implementations, parallelization, and languages (C++, OpenMP).
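The convergence for big batches can be read through a simple cost model: if a predict call pays a fixed per-call overhead plus a per-row cost, the time per observation is overhead / n + per_obs, and the fixed part is amortized as the batch grows. A toy illustration with made-up constants (not measured values):

```python
def time_per_observation(batch_size, overhead, per_obs):
    # Toy cost model (illustrative, made-up constants): a predict call
    # costs a fixed per-call overhead plus a linear per-observation term.
    return overhead / batch_size + per_obs


sizes = (1, 10, 100)
# Assumed high per-call overhead, e.g. Python dispatch and input validation.
skl = [time_per_observation(n, overhead=5e-3, per_obs=1e-4) for n in sizes]
# Assumed low per-call overhead, similar per-row cost inside the C++ kernels.
ort = [time_per_observation(n, overhead=1e-4, per_obs=1e-4) for n in sizes]

# The gap is large for a single observation and shrinks as the fixed
# overhead is amortized over the batch.
ratios = [s / o for s, o in zip(skl, ort)]
```

Under this sketch the ratio drops from roughly 25x at batch size 1 to under 2x at batch size 100, which matches the shape of the curves above.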
Total running time of the script: (1 minutes 19.181 seconds)