Note
Go to the end to download the full example code.
Converting a pipeline with a LightGBM regressor¶
The discrepancies observed when using floats with the TreeEnsemble operator (see Issues when switching to float) explain why the converter for LGBMRegressor may introduce significant discrepancies even when the model is computed with float tensors.
The library lightgbm is implemented with double precision floats. A random forest regressor with multiple trees computes its prediction by adding the prediction of every tree. Once converted into ONNX, this summation becomes $\left[\sum\right]_{i=1}^{F} float(T_i(x))$, where $F$ is the number of trees in the forest, $T_i(x)$ the output of tree $i$, and $\left[\sum\right]$ a float addition. The discrepancy can be expressed as $D(x) = \left|\left[\sum\right]_{i=1}^{F} float(T_i(x)) - \sum_{i=1}^{F} T_i(x)\right|$. It grows with the number of trees in the forest.
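This accumulation effect can be reproduced with plain numpy. The per-tree outputs below are synthetic placeholders for the output of each tree, not values coming from an actual model:

```python
import numpy

# synthetic stand-ins for the output of each tree, in double precision
rng = numpy.random.RandomState(0)
tree_outputs = rng.randn(1000) / 1000

# reference sum computed in double precision
exact = tree_outputs.sum()

# the same sum accumulated entirely in float32, as in the converted graph
accum32 = numpy.float32(0.0)
for t in tree_outputs.astype(numpy.float32):
    accum32 += t  # a float32 addition

print("discrepancy:", abs(float(accum32) - float(exact)))
```

Each individual float32 addition introduces only a tiny rounding error, but the errors accumulate over the thousand additions performed here.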
To reduce that impact, an option was added to split the node TreeEnsembleRegressor into multiple nodes, the partial sums being added in double precision this time. If we assume the node is split into $a$ nodes, the discrepancy becomes $D'(x) = \left|\sum_{k=1}^{a} \left[\sum\right]_{i=1}^{F/a} float(T_{(k-1)F/a+i}(x)) - \sum_{i=1}^{F} T_i(x)\right|$, where the outer sum over the $a$ partial results is a double precision addition.
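The split trick can be sketched the same way: sum each block of trees in float32, then combine the partial sums in double precision. The block size and the values below are illustrative only, not taken from an actual model:

```python
import numpy

rng = numpy.random.RandomState(0)
tree_outputs = rng.randn(1000) / 1000  # synthetic per-tree outputs
exact = tree_outputs.sum()             # double precision reference


def float32_sum(values):
    # accumulate entirely in float32, like a single TreeEnsembleRegressor node
    acc = numpy.float32(0.0)
    for v in values.astype(numpy.float32):
        acc += v
    return float(acc)


# one node: all 1000 trees summed in float32
one_node = float32_sum(tree_outputs)

# split into 10 nodes of 100 trees; partial sums combined in double precision
split = sum(float32_sum(block) for block in numpy.split(tree_outputs, 10))

print("one node:", abs(one_node - exact))
print("split   :", abs(split - exact))
```

On most inputs the split version lands closer to the double precision reference, although this is not guaranteed for every single draw.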
Train a LGBMRegressor¶
import packaging.version as pv
import warnings
import timeit
import numpy
from pandas import DataFrame
import matplotlib.pyplot as plt
from tqdm import tqdm
from lightgbm import LGBMRegressor
from onnxruntime import InferenceSession
from skl2onnx import to_onnx, update_registered_converter
from skl2onnx.common.shape_calculator import (
calculate_linear_regressor_output_shapes,
)
from onnxmltools import __version__ as oml_version
from onnxmltools.convert.lightgbm.operator_converters.LightGbm import (
convert_lightgbm,
)
N = 1000
X = numpy.random.randn(N, 20)
y = numpy.random.randn(N) + numpy.random.randn(N) * 100 * numpy.random.randint(
0, 1, 1000
)
reg = LGBMRegressor(n_estimators=1000)
reg.fit(X, y)
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000475 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 5100
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 20
[LightGBM] [Info] Start training from score 0.033823
Register the converter for LGBMRegressor¶
The converter is implemented in onnxmltools: onnxmltools…LightGbm.py, as well as the shape calculator: onnxmltools…Regressor.py.
def skl2onnx_convert_lightgbm(scope, operator, container):
    options = scope.get_options(operator.raw_operator)
    if "split" in options:
        if pv.Version(oml_version) < pv.Version("1.9.2"):
            warnings.warn(
                "Option split was released in version 1.9.2 but %s is "
                "installed. It will be ignored." % oml_version,
                stacklevel=0,
            )
        operator.split = options["split"]
    else:
        operator.split = None
    convert_lightgbm(scope, operator, container)


update_registered_converter(
    LGBMRegressor,
    "LightGbmLGBMRegressor",
    calculate_linear_regressor_output_shapes,
    skl2onnx_convert_lightgbm,
    options={"split": None},
)
Conversion¶
We convert the same model following two scenarios: a single TreeEnsembleRegressor node, or multiple nodes. Parameter split is the number of trees per TreeEnsembleRegressor node.
model_onnx = to_onnx(
    reg, X[:1].astype(numpy.float32), target_opset={"": 14, "ai.onnx.ml": 2}
)

model_onnx_split = to_onnx(
    reg,
    X[:1].astype(numpy.float32),
    target_opset={"": 14, "ai.onnx.ml": 2},
    options={"split": 100},
)
Discrepancies¶
sess = InferenceSession(
    model_onnx.SerializeToString(), providers=["CPUExecutionProvider"]
)
sess_split = InferenceSession(
    model_onnx_split.SerializeToString(), providers=["CPUExecutionProvider"]
)
X32 = X.astype(numpy.float32)
expected = reg.predict(X32)
got = sess.run(None, {"X": X32})[0].ravel()
got_split = sess_split.run(None, {"X": X32})[0].ravel()
disp = numpy.abs(got - expected).sum()
disp_split = numpy.abs(got_split - expected).sum()
print("sum of discrepancies 1 node", disp)
print("sum of discrepancies split node", disp_split, "ratio:", disp / disp_split)
sum of discrepancies 1 node 0.00010992635655524084
sum of discrepancies split node 4.182445361007939e-05 ratio: 2.6282795605666776
On this run, the sum of discrepancies is reduced by a factor of about 2.6. The maximum discrepancy is also significantly better.
disc = numpy.abs(got - expected).max()
disc_split = numpy.abs(got_split - expected).max()
print("max discrepancies 1 node", disc)
print("max discrepancies split node", disc_split, "ratio:", disc / disc_split)
max discrepancies 1 node 9.335552881850617e-07
max discrepancies split node 2.9892391495423e-07 ratio: 3.1230531967574557
Processing time¶
The processing time is longer, but not by much.
print(
    "processing time no split",
    timeit.timeit(lambda: sess.run(None, {"X": X32})[0], number=150),
)
print(
    "processing time split",
    timeit.timeit(lambda: sess_split.run(None, {"X": X32})[0], number=150),
)
processing time no split 1.0977208300027996
processing time split 1.2429911670005822
Split influence¶
Let's see how the discrepancies evolve with the parameter split.
res = []
for i in tqdm([*range(20, 170, 20), 200, 300, 400, 500]):
    model_onnx_split = to_onnx(
        reg,
        X[:1].astype(numpy.float32),
        target_opset={"": 14, "ai.onnx.ml": 2},
        options={"split": i},
    )
    sess_split = InferenceSession(
        model_onnx_split.SerializeToString(), providers=["CPUExecutionProvider"]
    )
    got_split = sess_split.run(None, {"X": X32})[0].ravel()
    disc_split = numpy.abs(got_split - expected).max()
    res.append(dict(split=i, disc=disc_split))

df = DataFrame(res).set_index("split")
df["baseline"] = disc
print(df)
disc baseline
split
20 1.955193e-07 9.335553e-07
40 3.277593e-07 9.335553e-07
60 3.374452e-07 9.335553e-07
80 3.948104e-07 9.335553e-07
100 2.989239e-07 9.335553e-07
120 2.703531e-07 9.335553e-07
140 3.906515e-07 9.335553e-07
160 3.629678e-07 9.335553e-07
200 4.123835e-07 9.335553e-07
300 6.290701e-07 9.335553e-07
400 5.661779e-07 9.335553e-07
500 6.868547e-07 9.335553e-07
Graph.
_, ax = plt.subplots(1, 1)
df.plot(
    title="Sum of discrepancies against split\nsplit = number of tree per node",
    ax=ax,
)
# plt.show()

Total running time of the script: (0 minutes 19.930 seconds)