Posted by pra*_*vii

Dask: distributed.scheduler - ERROR - Couldn't gather keys

import joblib
from dask_ml.model_selection import GridSearchCV
from xgboost import XGBRegressor

# Note: sklearn.externals.joblib is deprecated; joblib is imported directly.
# param_grid, df2 and df3 are defined elsewhere.
with joblib.parallel_backend('dask'):
    grid_search = GridSearchCV(estimator=XGBRegressor(), param_grid=param_grid,
                               cv=3, n_jobs=-1)
    grid_search.fit(df2, df3)

I created a Dask cluster from two local machines:

from dask.distributed import Client
client = Client('tcp://191.xxx.xx.xxx:8786')

I am trying to find the best parameters with Dask's GridSearchCV, but I am getting the following error:

distributed.scheduler - ERROR - Couldn't gather keys {"('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 1202, 2)": ['tcp://127.0.0.1:3738']} state: ['processing'] workers: ['tcp://127.0.0.1:3738']
NoneType: None
distributed.scheduler - ERROR - Workers don't have promised key: ['tcp://127.0.0.1:3738'], ('xgbregressor-fit-score-7cb7087b3aff75a31f487cfe5a9cedb0', 1202, 2)
NoneType: None
distributed.client - WARNING - Couldn't gather …
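"Couldn't gather keys" often means workers died mid-task or that the client, scheduler, and workers run mismatched package versions. As a first diagnostic (a minimal sketch, assuming the `distributed` package is installed; a throwaway `LocalCluster` stands in for the real scheduler address), `Client.get_versions(check=True)` compares the installed versions on every node and raises if they differ:

```python
from dask.distributed import Client, LocalCluster

# Stand-in for the real scheduler address, so the sketch is self-contained.
cluster = LocalCluster(n_workers=2, threads_per_worker=1)
client = Client(cluster)

# Raises an error if client/scheduler/worker package versions differ,
# a frequent cause of "Couldn't gather keys" / lost-key errors.
versions = client.get_versions(check=True)
print(sorted(versions))

client.close()
cluster.close()
```

If this raises on the real cluster, aligning the dask, distributed, and xgboost versions on both machines is the usual fix.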

python dask dask-distributed dask-ml

6 votes · 1 answer · 464 views

How can I save augmented images to a new folder without an infinite loop?

I am trying to save augmented images to a folder, but the loop runs forever. There are 5000 images in my folder, yet the number of augmented images keeps growing without bound. My goal is to get the same number of augmented images, i.e. 5000.

Thanks

import numpy as np
import imageio
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=90)

image_path = 'C:/Users/1/Desktop/DEEP/Dataset/Train/1training_c10882.png'
image = np.expand_dims(imageio.imread(image_path), 0)

save_here = 'D:/Augmented DATASET/'

# save_to_dir writes each augmented image to disk as it is generated.
generator = datagen.flow_from_directory('C:/Users/1/Desktop/DEEP/Dataset/Train',
                                        target_size=(224, 224),
                                        batch_size=256, class_mode='binary',
                                        save_to_dir=save_here)

# A Keras generator yields batches indefinitely, so the loop must stop
# once every image has been seen (5000 images here).
for batch_index, (inputs, outputs) in enumerate(generator, start=1):
    if batch_index * 256 >= 5000:
        break
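Because the generator cycles through the directory forever, the loop has to be cut off after exactly one pass. How many batches that takes is simple arithmetic (a sketch using the 5000 images and batch size 256 from the snippet above):

```python
import math

n_images = 5000    # images in the training folder
batch_size = 256   # batch_size passed to flow_from_directory

# Batches needed to see every image exactly once; the last batch is partial.
n_batches = math.ceil(n_images / batch_size)
print(n_batches)  # 20

# 19 full batches of 256 plus a final batch of 136 covers all 5000 images.
assert (n_batches - 1) * batch_size + 136 == n_images
```

Stopping after `n_batches` iterations therefore yields one augmented copy per source image rather than an unbounded stream.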

infinite-loop python-3.x deep-learning keras data-augmentation

1 vote · 1 answer · 8331 views