Why doesn't Dask respect LocalCluster's memory limit?

jcf*_*cco 5 python k-means scikit-learn dask dask-ml

I am (deliberately) running the code pasted below on a machine with 16 GB of RAM.

import dask.array as da
import dask.delayed
from sklearn.datasets import make_blobs
import numpy as np

from dask_ml.cluster import KMeans
from dask.distributed import Client

client = Client(n_workers=4, threads_per_worker=1, processes=False,
                memory_limit='2GB', scheduler_port=0,
                silence_logs=False, dashboard_address=8787)

n_centers = 12
n_features = 4

X_small, y_small = make_blobs(n_samples=1000, centers=n_centers, n_features=n_features, random_state=0)

centers = np.zeros((n_centers, n_features))

for i in range(n_centers):
    centers[i] = X_small[y_small == i].mean(0)

print(centers)

# 450 * 650 * 900 = 263,250,000 samples per block; with 4 float64 features
# that is roughly 8.4 GB per block
n_samples_per_block = 450 * 650 * 900
n_blocks = 4

delayeds = [dask.delayed(make_blobs)(n_samples=n_samples_per_block,
                                     centers=centers,
                                     n_features=n_features,
                                     random_state=i)[0]
            for i in range(n_blocks)]
arrays = [da.from_delayed(obj, shape=(n_samples_per_block, n_features), dtype=X_small.dtype)
          for obj in delayeds]
X = da.concatenate(arrays)

print(X)

X = X.rechunk((1000, 4))

clf = KMeans(init_max_iter=3, oversampling_factor=10)

clf.fit(X)

client.close()

Considering that I am creating 4 workers with a memory limit of 2 GB each (8 GB in total), I expected the algorithm not to exceed that amount of memory on this machine. Unfortunately, it uses more than 16 GB of memory and spills into swap.

Unless I am misunderstanding Dask's concepts, I really don't see what is wrong with this code (especially since it has no complexity in terms of data dependencies).
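
For reference, this is how I would check the limit each worker was actually given (a minimal sketch; it assumes client.scheduler_info() reports a per-worker memory_limit field, which the dask.distributed versions I have used do):

# inspect the memory limit the scheduler has registered for each worker
info = client.scheduler_info()
for address, worker in info["workers"].items():
    print(address, worker["memory_limit"])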

Sul*_*yev 3

This is not a direct answer to the question of why dask does not respect the memory constraint (the short answer seems to be that it is simply not a hard limit), but the code can be improved along the following lines:

  • use the dask-adapted make_blobs from dask_ml: this removes the overhead of building the dask array by hand from delayed objects and the associated reshaping/rechunking;
  • use a context manager to create the client (and cluster): this handles .close more reliably, especially if the code executed on the workers raises an error.
from dask.distributed import Client
from dask_ml.cluster import KMeans
from dask_ml.datasets import make_blobs

client_params = dict(
    n_workers=4,
    threads_per_worker=1,
    processes=False,
    memory_limit="2GB",
    scheduler_port=0,
    silence_logs=False,
    dashboard_address=8787,
)

n_centers = 12
n_features = 4
n_samples = 1000 * 100
chunks = (1000 * 50, 4)

X, _ = make_blobs(
    n_samples=n_samples,
    centers=n_centers,
    n_features=n_features,
    random_state=0,
    chunks=chunks,
)

clf = KMeans(init_max_iter=3, oversampling_factor=10, n_clusters=n_centers)

with Client(**client_params) as client:
    result = clf.fit(X)

print(result.cluster_centers_)
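
If memory use still needs to be reined in, the fractions of the limit at which a worker spills to disk, pauses, and is restarted can also be tuned through dask's configuration before the cluster is created. A minimal sketch using the standard distributed.worker.memory.* configuration keys and reusing client_params, clf and X from the snippet above (the fractions are illustrative, not recommendations; note that the terminate threshold is enforced by the nanny process, so with processes=False it may not apply):

import dask

# Fractions of the worker memory limit at which data is spilled to disk
# (target/spill), task execution is paused (pause) and the worker is
# restarted (terminate). Defaults are roughly 0.6 / 0.7 / 0.8 / 0.95.
dask.config.set({
    "distributed.worker.memory.target": 0.5,
    "distributed.worker.memory.spill": 0.6,
    "distributed.worker.memory.pause": 0.7,
    "distributed.worker.memory.terminate": 0.9,
})

with Client(**client_params) as client:
    result = clf.fit(X)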