cin*_*ero (score 11) · tags: cpu, gpu, python-3.x, xgboost
I apologize in advance, as I'm a beginner. I'm running a GPU vs. CPU test of XGBoost using both the native xgb.train API and XGBClassifier. The results are as follows:
passed time with xgb (gpu): 0.390s
passed time with XGBClassifier (gpu): 0.465s
passed time with xgb (cpu): 0.412s
passed time with XGBClassifier (cpu): 0.421s
I'm wondering why the CPU seems to perform on par with the GPU. Here is my setup:

** Also tried with an xgboost version installed via pip from the pre-built binary wheel: same problem.

Here is the test code I'm using (taken from here):
import time
import xgboost as xgb
# X_train2 and y_train are the training data (defined earlier, not shown)

param = {'max_depth':5, 'objective':'binary:logistic', 'subsample':0.8,
         'colsample_bytree':0.8, 'eta':0.5, 'min_child_weight':1,
         'tree_method':'gpu_hist'
         }
num_round = 100

dtrain = xgb.DMatrix(X_train2, y_train)
tic = time.time()
model = xgb.train(param, dtrain, num_round)
print('passed time with xgb (gpu): %.3fs'%(time.time()-tic))

xgb_param = {'max_depth':5, 'objective':'binary:logistic', 'subsample':0.8,
             'colsample_bytree':0.8, 'learning_rate':0.5, 'min_child_weight':1,
             'tree_method':'gpu_hist'}
model = xgb.XGBClassifier(**xgb_param)
tic = time.time()
model.fit(X_train2, y_train)
print('passed time with XGBClassifier (gpu): %.3fs'%(time.time()-tic))

param = {'max_depth':5, 'objective':'binary:logistic', 'subsample':0.8,
         'colsample_bytree':0.8, 'eta':0.5, 'min_child_weight':1,
         'tree_method':'hist'}
num_round = 100

dtrain = xgb.DMatrix(X_train2, y_train)
tic = time.time()
model = xgb.train(param, dtrain, num_round)
print('passed time with xgb (cpu): %.3fs'%(time.time()-tic))

xgb_param = {'max_depth':5, 'objective':'binary:logistic', 'subsample':0.8,
             'colsample_bytree':0.8, 'learning_rate':0.5, 'min_child_weight':1,
             'tree_method':'hist'}
model = xgb.XGBClassifier(**xgb_param)
tic = time.time()
model.fit(X_train2, y_train)
print('passed time with XGBClassifier (cpu): %.3fs'%(time.time()-tic))
I tried combining it with a Sklearn grid search to see whether I'd get a speedup on the GPU, but it ended up much slower than the CPU:
passed time with XGBClassifier (gpu): 2457.510s
Best parameter (CV score=0.490):
{'xgbclass__alpha': 100, 'xgbclass__eta': 0.01, 'xgbclass__gamma': 0.2, 'xgbclass__max_depth': 5, 'xgbclass__n_estimators': 100}
passed time with XGBClassifier (cpu): 383.662s
Best parameter (CV score=0.487):
{'xgbclass__alpha': 100, 'xgbclass__eta': 0.1, 'xgbclass__gamma': 0.2, 'xgbclass__max_depth': 2, 'xgbclass__n_estimators': 20}
I'm working with a dataset of 75k observations. Any idea why I'm not getting a speedup from using the GPU? Is the dataset too small to benefit from it?

Any help would be much appreciated. Thanks a lot!
Answered by Jer*_* M. (score 11)
Interesting question. As you noted, there are a few examples of this that have been reported on Github and the official xgboost site:

Other users have also raised similar questions:

Looking at the official xgboost documentation, there is an extensive section on GPU support.
There are a few things to check. The documentation states:

Tree construction (training) and prediction can be accelerated with CUDA-capable GPUs.
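As a rough sketch of what that means in code (my example, not from the original post; it assumes XGBoost 1.x built with CUDA support), both the training path and the prediction path can be pointed at the GPU:

# Minimal sketch: GPU training via tree_method and GPU prediction via
# the predictor parameter (XGBoost 1.x). Data here is synthetic.
import time

import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=10000, n_features=200, random_state=0)
dtrain = xgb.DMatrix(X, y)

param = {'objective': 'binary:logistic',
         'tree_method': 'gpu_hist',      # GPU-accelerated tree construction
         'predictor': 'gpu_predictor'}   # GPU-accelerated prediction

tic = time.time()
booster = xgb.train(param, dtrain, num_boost_round=100)
preds = booster.predict(dtrain)
print('GPU train + predict: %.3fs' % (time.time() - tic))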
Keep in mind that only certain parameters benefit from using a GPU. They are:

{subsample, sampling_method, colsample_bytree, colsample_bylevel, max_bin, gamma, gpu_id, predictor, grow_policy, monotone_constraints, interaction_constraints, single_precision_histogram}

And yes, you are using them: most of these appear in your hyperparameter set, which is a good thing.
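For illustration, a hedged sketch of a gpu_hist configuration that exercises several of the parameters in that list (the values are arbitrary examples, not tuned recommendations):

# Illustrative only: a gpu_hist parameter set touching several of the
# GPU-aware parameters listed above (XGBoost 1.x).
param = {
    'tree_method': 'gpu_hist',
    'gpu_id': 0,                          # which CUDA device to use
    'sampling_method': 'gradient_based',  # GPU-only sampling strategy
    'subsample': 0.5,                     # gradient_based tolerates low subsample
    'colsample_bytree': 0.8,
    'max_bin': 256,                       # number of histogram bins
    'single_precision_histogram': True,   # faster, slightly less precise
    'objective': 'binary:logistic',
}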
If you look at the XGBoost parameters page, you can find additional areas that may help improve your times. For example, updater can be set to grow_gpu_hist (note: this is moot since you already have tree_method set, but for the notes):

grow_gpu_hist: Grow tree with GPU.
At the bottom of the parameters page, there are additional parameters for gpu_hist, notably deterministic_histogram (note: this is moot since it defaults to True):

Build histogram on GPU deterministically. Histogram building is not deterministic due to the non-associative aspect of floating point summation. We employ a pre-rounding routine to mitigate the issue, which may lead to slightly lower accuracy. Set to false to disable it.
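In code, opting out of that pre-rounding routine is a one-line change (a sketch of mine, again assuming an XGBoost 1.x build where this parameter exists):

# Illustrative: disabling the deterministic pre-rounding described above.
# This trades run-to-run determinism for a possible accuracy/speed change.
param = {
    'tree_method': 'gpu_hist',
    'deterministic_histogram': False,  # default is True, so this is opt-out
    'objective': 'binary:logistic',
}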
I ran some experiments on some interesting data. Since I didn't have access to your data, I used sklearn's make_classification, which generates data in a fairly reliable way.
I made a few changes to your script but noticed no change: I varied the hyperparameters on the GPU vs. CPU examples, ran it 100 times and averaged the results, and so on. Nothing seemed to stand out to me. I recall once using XGBoost's GPU vs. CPU capabilities to speed up some analysis; however, I was working with a much larger dataset then.
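For reference, the repeated-run timing I mean looks roughly like this (a minimal sketch; the run count and synthetic data are placeholders, not my exact setup):

# Minimal repeated-run timing sketch: repeat training and average out noise.
import time

import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=75000, n_features=500, random_state=8)
dtrain = xgb.DMatrix(X, y)
param = {'tree_method': 'gpu_hist', 'objective': 'binary:logistic'}

times = []
for _ in range(100):  # repeat and average
    tic = time.time()
    xgb.train(param, dtrain, num_boost_round=100)
    times.append(time.time() - tic)
print('mean: %.3fs' % (sum(times) / len(times)))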
I edited your script slightly to use this data, and began varying the number of samples and features in the dataset (via the n_samples and n_features parameters) to observe the effect on runtime. It appears that a GPU significantly improves training time for high-dimensional data, but bulk data with many samples sees no huge improvement. See my script below:
import time

import xgboost as xgb
from sklearn.datasets import make_classification

xgb_gpu = []
xgbclassifier_gpu = []
xgb_cpu = []
xgbclassifier_cpu = []

n_samples = 75000
n_features = 500

for i in range(10):
    n_samples += 10000
    n_features += 300
    # Make my own data since I do not have the data from the SO question
    X_train2, y_train = make_classification(n_samples=n_samples,
                                            n_features=int(n_features*0.9),
                                            n_informative=int(n_features*0.1),
                                            n_redundant=100, flip_y=0.10,
                                            random_state=8)

    # Keep script from OP intact
    param = {'max_depth':5, 'objective':'binary:logistic', 'subsample':0.8,
             'colsample_bytree':0.8, 'eta':0.5, 'min_child_weight':1,
             'tree_method':'gpu_hist', 'gpu_id': 0
             }
    num_round = 100

    dtrain = xgb.DMatrix(X_train2, y_train)
    tic = time.time()
    model = xgb.train(param, dtrain, num_round)
    print('passed time with xgb (gpu): %.3fs'%(time.time()-tic))
    xgb_gpu.append(time.time()-tic)

    xgb_param = {'max_depth':5, 'objective':'binary:logistic', 'subsample':0.8,
                 'colsample_bytree':0.8, 'learning_rate':0.5, 'min_child_weight':1,
                 'tree_method':'gpu_hist', 'gpu_id':0}
    model = xgb.XGBClassifier(**xgb_param)
    tic = time.time()
    model.fit(X_train2, y_train)
    print('passed time with XGBClassifier (gpu): %.3fs'%(time.time()-tic))
    xgbclassifier_gpu.append(time.time()-tic)

    param = {'max_depth':5, 'objective':'binary:logistic', 'subsample':0.8,
             'colsample_bytree':0.8, 'eta':0.5, 'min_child_weight':1,
             'tree_method':'hist'}
    num_round = 100

    dtrain = xgb.DMatrix(X_train2, y_train)
    tic = time.time()
    model = xgb.train(param, dtrain, num_round)
    print('passed time with xgb (cpu): %.3fs'%(time.time()-tic))
    xgb_cpu.append(time.time()-tic)

    xgb_param = {'max_depth':5, 'objective':'binary:logistic', 'subsample':0.8,
                 'colsample_bytree':0.8, 'learning_rate':0.5, 'min_child_weight':1,
                 'tree_method':'hist'}
    model = xgb.XGBClassifier(**xgb_param)
    tic = time.time()
    model.fit(X_train2, y_train)
    print('passed time with XGBClassifier (cpu): %.3fs'%(time.time()-tic))
    xgbclassifier_cpu.append(time.time()-tic)

import pandas as pd
df = pd.DataFrame({'XGB GPU': xgb_gpu, 'XGBClassifier GPU': xgbclassifier_gpu, 'XGB CPU': xgb_cpu, 'XGBClassifier CPU': xgbclassifier_cpu})
#df.to_csv('both_results.csv')
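As a side note, if you want to reduce those four timing lists to a single comparison, a quick sketch continuing from the df built above:

# Sketch: summarize the timings collected above as CPU/GPU speedup ratios.
# Values above 1 mean the GPU run was faster for that interval.
df['xgb speedup'] = df['XGB CPU'] / df['XGB GPU']
df['XGBClassifier speedup'] = df['XGBClassifier CPU'] / df['XGBClassifier GPU']
print(df[['xgb speedup', 'XGBClassifier speedup']].round(2))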
I varied each of (samples, features) separately and together on the same dataset. See the results below:
| Interval | XGB GPU | XGBClassifier GPU | XGB CPU | XGBClassifier CPU | Metric |
|:--------:|:--------:|:-----------------:|:--------:|:-----------------:|:----------------:|
| 0 | 11.3801 | 12.00785 | 15.20124 | 15.48131 | Changed Features |
| 1 | 15.67674 | 16.85668 | 20.63819 | 22.12265 | Changed Features |
| 2 | 18.76029 | 20.39844 | 33.23108 | 32.29926 | Changed Features |
| 3 | 23.147 | 24.91953 | 47.65588 | 44.76052 | Changed Features |
| 4 | 27.42542 | 29.48186 | 50.76428 | 55.88155 | Changed Features |
| 5 | 30.78596 | 33.03594 | 71.4733 | 67.24275 | Changed Features |
| 6 | 35.03331 | 37.74951 | 77.68997 | 75.61216 | Changed Features |
| 7 | 39.13849 | 42.17049 | 82.95307 | 85.83364 | Changed Features |
| 8 | 42.55439 | 45.90751 | 92.33368 | 96.72809 | Changed Features |
| 9 | 46.89023 | 50.57919 | 105.8298 | 107.3893 | Changed Features |
| 0 | 7.013227 | 7.303488 | 6.998254 | 9.733574 | No Changes |
| 1 | 6.757523 | 7.302388 | 5.714839 | 6.805287 | No Changes |
| 2 | 6.753428 | 7.291906 | 5.899611 | 6.603533 | No Changes |
| 3 | 6.749848 | 7.293555 | 6.005773 | 6.486256 | No Changes |
| 4 | 6.755352 | 7.297607 | 5.982163 | 8.280619 | No Changes |
| 5 | 6.756498 | 7.335412 | 6.321188 | 7.900422 | No Changes |
| 6 | 6.792402 | 7.332112 | 6.17904 | 6.443676 | No Changes |
| 7 | 6.786584 | 7.311666 | 7.093638 | 7.811417 | No Changes |
| 8 | 6.7851 | 7.30604 | 5.574762 | 6.045969 | No Changes |
| 9 | 6.789152 | 7.309363 | 5.751018 | 6.213471 | No Changes |
| 0 | 7.696765 | 8.03615 | 6.175457 | 6.764809 | Changed Samples |
| 1 | 7.914885 | 8.646722 | 6.997217 | 7.598789 | Changed Samples |
| 2 | 8.489555 | 9.2526 | 6.899783 | 7.202334 | Changed Samples |
| 3 | 9.197605 | 10.02934 | 7.511708 | 7.724675 | Changed Samples |
| 4 | 9.73642 | 10.64056 | 7.918493 | 8.982463 | Changed Samples |
| 5 | 10.34522 | 11.31103 | 8.524865 | 9.403711 | Changed Samples |
| 6 | 10.94025 | 11.98357 | 8.697257 | 9.49277 | Changed Samples |
| 7 | 11.80717 | 12.93195 | 8.734307 | 10.79595 | Changed Samples |
| 8 | 12.18282 | 13.38646 | 9.175231 | 10.33532 | Changed Samples |
| 9 | 13.05499 | 14.33106 | 11.04398 | 10.50722 | Changed Samples |
| 0 | 12.43683 | 13.19787 | 12.80741 | 13.86206 | Changed Both |
| 1 | 18.59139 | 20.01569 | 25.61141 | 35.37391 | Changed Both |
| 2 | 24.37475 | 26.44214 | 40.86238 | 42.79259 | Changed Both |
| 3 | 31.96762 | 34.75215 | 68.869 | 59.97797 | Changed Both |
| 4 | 41.26578 | 44.70537 | 83.84672 | 94.62811 | Changed Both |
| 5 | 49.82583 | 54.06252 | 109.197 | 108.0314 | Changed Both |
| 6 | 59.36528 | 64.60577 | 131.1234 | 140.6352 | Changed Both |
| 7 | 71.44678 | 77.71752 | 156.1914 | 161.4897 | Changed Both |
| 8 | 81.79306 | 90.56132 | 196.0033 | 193.4111 | Changed Both |
| 9 | 94.71505 | 104.8044 | 215.0758 | 224.6175 | Changed Both |
As I dug into this more, it made sense. GPUs are known to scale well with high-dimensional data, so it stands to reason that you would see an improvement in training time if your data were high-dimensional. See the following examples:
While we can't be sure without access to your data, it seems that a GPU's hardware capabilities offer significant performance gains when the data lends itself to them, and given the size and shape of your data, that does not appear to be the case here.
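One practical sanity check I'd suggest (my suggestion, not something from the original question): watch GPU utilization while the training runs. If it stays low, the run is dominated by overhead (data transfer, kernel launches) rather than compute. A rough sketch, assuming the NVIDIA driver's nvidia-smi tool is on your PATH:

# Sketch: poll GPU utilization and memory once per second from Python
# while training runs in another process. Requires nvidia-smi.
import subprocess
import time

for _ in range(10):  # take ten one-second samples
    out = subprocess.run(
        ['nvidia-smi', '--query-gpu=utilization.gpu,memory.used',
         '--format=csv,noheader'],
        capture_output=True, text=True)
    print(out.stdout.strip())
    time.sleep(1)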
Answered by 小智 (score -3)
Choosing CPU vs. GPU

The complexity of a neural network also depends on the number of input features, not just the number of units in the hidden layers. If your hidden layer has 50 units and each observation in the dataset has 4 input features, then your network is tiny (~200 parameters). If each observation instead has 5M input features (as some large contexts require), then your network is quite large in terms of parameter count.
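To make that parameter arithmetic concrete, here is a small worked sketch of my own (assuming a single hidden layer and one output unit):

# Rough parameter count for a one-hidden-layer network:
# input->hidden weights + hidden biases + hidden->output weights + output bias.
def n_params(n_inputs, n_hidden, n_outputs=1):
    return (n_inputs * n_hidden + n_hidden
            + n_hidden * n_outputs + n_outputs)

print(n_params(4, 50))          # 301 -- tiny network (~200 of them weights)
print(n_params(5_000_000, 50))  # 250,000,101 -- very large network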
From my observations, your model has only a small number of parameters to process, which is why it spends so much time on the GPU.

Speaking from my own experience:

I once trained images with a CNN algorithm and ran predictions on both GPU and CPU. The CPU needed little processing time to produce a trained model on the full dataset, while the GPU needed more.