I'm working on an assignment where I need to do KNN regression using the sklearn library. However, if I have missing data (assume it's missing at random), I'm not supposed to impute it. Instead, I have to leave it as null and somehow make my code ignore comparisons where one of the values is null.
For example, if my observations are (1, 2, 3, 4, null, 6) and (1, null, 3, 4, 5, 6), then I would ignore the second and fifth values when comparing them.
Is this possible with the sklearn library?
ETA: I would just drop the nulls, but I don't know what the data they'll be testing on will look like; it could end up dropping anywhere from 0% to 99% of the data.
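For concreteness, here is a minimal sketch of the comparison described above, using numpy with np.nan standing in for the nulls (numpy and the variable names are only for illustration, not part of the assignment):

import numpy as np

# Two observations with missing values marked as np.nan
a = np.array([1, 2, 3, 4, np.nan, 6], dtype=float)
b = np.array([1, np.nan, 3, 4, 5, 6], dtype=float)

# Keep only the coordinates where both values are present (drops the 2nd and 5th)
mask = ~np.isnan(a) & ~np.isnan(b)
masked_dist = np.sqrt(np.sum((a[mask] - b[mask]) ** 2))
print(masked_dist)  # 0.0 here, because the remaining coordinates are identical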
This depends somewhat on exactly what you're trying to do.
It sounds like you want the distance between [1, 2, 3, 4, None, 6] and [1, None, 3, 4, 5, 6] to be computed using only the coordinates where both values are present (here the 1st, 3rd, 4th, and 6th). In that case you need some kind of custom metric, which sklearn supports. Unfortunately, you can't feed nulls into KNN's fit() method, so even with a custom metric you can't quite get what you want directly. The solution is to precompute the distances. For example:
from math import sqrt, isfinite
from sklearn.neighbors import KNeighborsRegressor
X_train = [
    [1, 2, 3, 4, None, 6],
    [1, None, 3, 4, 5, 6],
]
y_train = [3.14, 2.72]  # we're regressing something
def euclidean(p, q):
    # Could also use numpy routines
    return sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

def is_num(x):
    # The `is not None` check needs to happen first because of short-circuiting
    return x is not None and isfinite(x)

def restricted_points(p, q):
    # Returns copies of `p` and `q`, dropping coordinates where either vector
    # is None, inf, or nan
    return tuple(zip(*[(x, y) for x, y in zip(p, q) if all(map(is_num, (x, y)))]))

def dist(p, q):
    # Note that in this form you can use any metric you like on the
    # restricted vectors, not just the euclidean metric
    return euclidean(*restricted_points(p, q))
dists = [[dist(p,q) for p in X_train] for q in X_train]
knn = KNeighborsRegressor(
    n_neighbors=1,  # only needed in our test example since we have so few data points
    metric='precomputed'
)
knn.fit(dists, y_train)
X_test = [
    [1, 2, 3, None, None, 6],
]
# We tell sklearn which points in the knn graph to use by telling it how far
# our queries are from every input. This is super inefficient.
predictions = knn.predict([[dist(q, p) for p in X_train] for q in X_test])
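As a side note (not part of the original answer): recent versions of scikit-learn also provide sklearn.metrics.pairwise.nan_euclidean_distances, which computes pairwise distances while ignoring coordinates that are NaN in either vector. It is not identical to the restricted metric above, because it rescales the squared distance to compensate for the number of missing coordinates, but if that variant is acceptable the precomputation becomes much shorter. A sketch, assuming the missing values are encoded as np.nan rather than None:

import numpy as np
from sklearn.metrics.pairwise import nan_euclidean_distances
from sklearn.neighbors import KNeighborsRegressor

X_train_arr = np.array([
    [1, 2, 3, 4, np.nan, 6],
    [1, np.nan, 3, 4, 5, 6],
], dtype=float)
y_train = [3.14, 2.72]

X_test_arr = np.array([[1, 2, 3, np.nan, np.nan, 6]], dtype=float)

knn = KNeighborsRegressor(n_neighbors=1, metric='precomputed')
knn.fit(nan_euclidean_distances(X_train_arr, X_train_arr), y_train)
predictions = knn.predict(nan_euclidean_distances(X_test_arr, X_train_arr))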
What to do if there are nulls in the outputs you're regressing to is still an open question, but your problem statement doesn't make it sound like that's an issue for you.