I am working with 3D point clouds from a LiDAR. The points are given by a numpy array that looks like this:
points = np.array([[61651921, 416326074, 39805], [61605255, 416360555, 41124],
                   [61664810, 416313743, 39900], [61664837, 416313749, 39910],
                   [61674456, 416316663, 39503], [61651933, 416326074, 39802],
                   [61679969, 416318049, 39500], [61674494, 416316677, 39508],
                   [61651908, 416326079, 39800], [61651908, 416326087, 39802],
                   [61664845, 416313738, 39913], [61674480, 416316668, 39503],
                   [61679996, 416318047, 39510], [61605290, 416360572, 41118],
                   [61605270, 416360565, 41122], [61683939, 416313004, 41052],
                   [61683936, 416313033, 41060], [61679976, 416318044, 39509],
                   [61605279, 416360555, 41109], [61664837, 416313739, 39915],
                   [61674487, 416316666, 39505], [61679961, 416318035, 39503],
                   [61683943, 416313004, 41054], [61683930, 416313042, 41059]])
I would like to have my data grouped into cubes of size 50*50*50, so that every cube preserves some hashable index together with the numpy indices of the points it contains. In order to split, I assign cubes = points // 50, which outputs:
cubes = np.array([[1233038, 8326521, 796], [1232105, 8327211, 822], [1233296, 8326274, 798],
                  [1233296, 8326274, 798], [1233489, 8326333, 790], [1233038, 8326521, 796],
                  [1233599, 8326360, 790], [1233489, 8326333, 790], [1233038, 8326521, 796],
                  [1233038, 8326521, 796], [1233296, 8326274, 798], [1233489, 8326333, 790],
                  [1233599, 8326360, 790], [1232105, 8327211, 822], [1232105, 8327211, 822],
                  [1233678, 8326260, 821], [1233678, 8326260, 821], [1233599, 8326360, 790],
                  [1232105, 8327211, 822], [1233296, 8326274, 798], [1233489, 8326333, 790],
                  [1233599, 8326360, 790], [1233678, 8326260, 821], [1233678, 8326260, 821]])
My desired output is:
{(1232105, 8327211, 822): [1, 13, 14, 18],
(1233038, 8326521, 796): [0, 5, 8, 9],
(1233296, 8326274, 798): [2, 3, 10, 19],
(1233489, 8326333, 790): [4, 7, 11, 20],
(1233599, 8326360, 790): [6, 12, 17, 21],
(1233678, 8326260, 821): [15, 16, 22, 23]}
My real point cloud contains up to several hundred million 3D points. What is the fastest way to do this kind of grouping?
I have tried most of the proposed solutions. Here is a timing comparison, assuming around 20 million points and around 1 million distinct cubes:
import pandas as pd
print(pd.DataFrame(cubes).groupby([0,1,2]).indices)
#takes 9sec
#thanks @abc:
from collections import defaultdict
result = defaultdict(list)
for idx, elem in enumerate(cubes):
    result[elem.tobytes()].append(idx)  # takes 20.5 sec
    # result[elem[0], elem[1], elem[2]].append(idx)  # takes 27 sec
    # result[tuple(elem)].append(idx)  # takes 50 sec
# thanks @Eelco Hoogendoorn for his numpy_indexed library:
import numpy_indexed as npi
values = npi.group_by(cubes).split(np.arange(len(cubes)))
result = dict(enumerate(values))
# takes 9.8sec
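Note that this dict is keyed by an arbitrary group enumeration rather than by the cubes themselves. If the cube triples are wanted as keys, here is a minimal sketch (my addition, exploiting the fact that all indices in one split group share the same cube row):

# key by cube coordinates instead of the enumeration (sketch):
# every index in a split group maps to the same cube row, so take the first
result_by_cube = {tuple(cubes[g[0]]): g for g in values}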
# thanks @Divakar for showing the numexpr library:
import numexpr as ne

def dimensionality_reduction(cubes):
    # cubes = cubes - np.min(cubes, axis=0)  # in case some coords are negative
    cubes = cubes.astype(np.int64)
    s0, s1 = cubes[:,0].max()+1, cubes[:,1].max()+1
    d = {'s0':s0, 's1':s1, 'c0':cubes[:,0], 'c1':cubes[:,1], 'c2':cubes[:,2]}
    c1D = ne.evaluate('c0+c1*s0+c2*s0*s1', d)
    return c1D

cubes = dimensionality_reduction(cubes)
result = pd.DataFrame(cubes).groupby([0]).indices
# takes 2.5 seconds
The cubes.npz file can be downloaded here and loaded with
cubes = np.load('cubes.npz')['array']
to check the performance times.
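If the file is not available, here is a rough sketch for generating synthetic data of a comparable scale, assuming uniformly distributed cubes (an assumption of mine, not the original data's distribution):

# synthetic stand-in for cubes.npz (assumption: uniformly distributed cubes);
# ~20 million rows drawn from ~1 million distinct cube coordinates
rng = np.random.default_rng(0)
n_points, n_cubes = 20_000_000, 1_000_000
unique_cubes = rng.integers(0, 10_000, size=(n_cubes, 3), dtype=np.int64)
cubes = unique_cubes[rng.integers(0, n_cubes, size=n_points)]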
We can perform dimensionality-reduction to reduce cubes to a 1D array. That is based on mapping the given cubes data onto an n-dim grid to compute the linear-index equivalents, discussed in detail here. Then, based on the uniqueness of those linear indices, we can segregate unique groups and their corresponding indices. Hence, following that strategy, we would have one solution, like so -
N = 4  # number of indices per group (here every cube holds exactly 4 points)
c1D = np.ravel_multi_index(cubes.T, cubes.max(0)+1)
sidx = c1D.argsort()
indices = sidx.reshape(-1,N)
unq_groups = cubes[indices[:,0]]
# If you need in a zipped dictionary format
out = dict(zip(map(tuple,unq_groups), indices))
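Note that the reshape is valid only if every cube contains exactly N points, as in the sample data. A quick sanity check for that assumption (my addition, not part of the original solution):

# sanity check for the equal-group-size assumption behind the reshape
_, counts = np.unique(c1D, return_counts=True)
assert (counts == N).all(), "group sizes vary; use the generic split-based version below"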
Alternative #1: If the integer values in cubes are too large, we might want the dimensionality-reduction to choose the dimensions with shorter extents as the primary axes. Hence, for those cases, we can modify the reduction step to get c1D, like so -
s1,s2 = cubes[:,:2].max(0)+1
s = np.r_[s2,1,s1*s2]
c1D = cubes.dot(s)
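To see why this dot product is a valid linear index, here is an equivalence check (a sketch of mine, not from the original solution) against np.ravel_multi_index with the axes reordered so that column 2 becomes the slowest-varying one:

# equivalence check: c1D = c0*s2 + c1 + c2*s1*s2 is a C-order linear index
# over the reordered coordinates (c2, c0, c1)
ref = np.ravel_multi_index(cubes[:, [2, 0, 1]].T,
                           (cubes[:, 2].max()+1, s1, s2))
assert (c1D == ref).all()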
Next up, we can use a Cython-powered kd-tree for quick nearest-neighbor lookups to get the nearest neighboring indices and hence solve our case, like so -
from scipy.spatial import cKDTree
idx = cKDTree(cubes).query(cubes, k=N)[1] # N = 4 as discussed earlier
I = idx[:,0].argsort().reshape(-1,N)[:,0]
unq_groups,indices = cubes[I],idx[I]
For the generic case where the number of indices per group can vary, we will extend the argsort-based method with some splitting to get our desired output, like so -
c1D = np.ravel_multi_index(cubes.T, cubes.max(0)+1)
sidx = c1D.argsort()
c1Ds = c1D[sidx]
split_idx = np.flatnonzero(np.r_[True,c1Ds[:-1]!=c1Ds[1:],True])
grps = cubes[sidx[split_idx[:-1]]]
indices = [sidx[i:j] for (i,j) in zip(split_idx[:-1],split_idx[1:])]
# If needed as dict o/p
out = dict(zip(map(tuple,grps), indices))
Using 1D versions of the groups of cubes as keys

We will extend the previously listed method with the groups of cubes as keys, to simplify the process of dictionary creation and also make it efficient, like so -
def numpy1(cubes):
    c1D = np.ravel_multi_index(cubes.T, cubes.max(0)+1)
    sidx = c1D.argsort()
    c1Ds = c1D[sidx]
    mask = np.r_[True, c1Ds[:-1]!=c1Ds[1:], True]
    split_idx = np.flatnonzero(mask)
    indices = [sidx[i:j] for (i,j) in zip(split_idx[:-1], split_idx[1:])]
    out = dict(zip(c1Ds[mask[:-1]], indices))
    return out
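The keys here are the linearized 1D values rather than (x, y, z) triples. If the original cube coordinates are needed as keys, they can be recovered with np.unravel_index; a minimal sketch (my addition):

out = numpy1(cubes)
# map the linear keys back to (x, y, z) tuples
dims = tuple(cubes.max(0) + 1)
keys3d = np.stack(np.unravel_index(np.fromiter(out, dtype=np.int64), dims), axis=1)
out3d = dict(zip(map(tuple, keys3d), out.values()))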
Next, we will make use of the numba package to iterate and get to the final hashable-dictionary output. Going with it, there would be two solutions - one that gets the keys and values separately using numba while the main call zips them and converts to a dict, and another that creates a numba-supported dict type, so that no extra work is required by the main calling function.

Hence, we would have the first numba solution:
from numba import njit

@njit
def _numba1(sidx, c1D):
    out = []
    n = len(sidx)
    start = 0
    grpID = []
    for i in range(1, n):
        if c1D[sidx[i]] != c1D[sidx[i-1]]:
            out.append(sidx[start:i])
            grpID.append(c1D[sidx[start]])
            start = i
    out.append(sidx[start:])
    grpID.append(c1D[sidx[start]])
    return grpID, out

def numba1(cubes):
    c1D = np.ravel_multi_index(cubes.T, cubes.max(0)+1)
    sidx = c1D.argsort()
    out = dict(zip(*_numba1(sidx, c1D)))
    return out
And the second numba solution:
from numba import types
from numba.typed import Dict

int_array = types.int64[:]

@njit
def _numba2(sidx, c1D):
    n = len(sidx)
    start = 0
    outt = Dict.empty(
        key_type=types.int64,
        value_type=int_array,
    )
    for i in range(1, n):
        if c1D[sidx[i]] != c1D[sidx[i-1]]:
            outt[c1D[sidx[start]]] = sidx[start:i]
            start = i
    outt[c1D[sidx[start]]] = sidx[start:]
    return outt

def numba2(cubes):
    c1D = np.ravel_multi_index(cubes.T, cubes.max(0)+1)
    sidx = c1D.argsort()
    out = _numba2(sidx, c1D)
    return out
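The numba-typed Dict returned here can be read directly from regular Python. A small usage sketch (valid when cubes is the small example array from the question; the coordinates differ for cubes.npz), looking up one cube via its linear key:

out = numba2(cubes)
# look up the point indices of one cube from the example data by its linear key
key = np.ravel_multi_index((1233038, 8326521, 796), tuple(cubes.max(0)+1))
print(out[int(key)])  # -> array of row indices falling into that cube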
Timings with the cubes.npz data -
In [4]: cubes = np.load('cubes.npz')['array']
In [5]: %timeit numpy1(cubes)
...: %timeit numba1(cubes)
...: %timeit numba2(cubes)
2.38 s ± 14.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
2.13 s ± 25.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.8 s ± 5.95 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Alternative #1: We can achieve further speedup with the numexpr library on large arrays when computing c1D, like so -
import numexpr as ne
s0,s1 = cubes[:,0].max()+1,cubes[:,1].max()+1
d = {'s0':s0,'s1':s1,'c0':cubes[:,0],'c1':cubes[:,1],'c2':cubes[:,2]}
c1D = ne.evaluate('c0+c1*s0+c2*s0*s1',d)
This would be applicable at all places that require c1D.
You may just iterate and add the index of each element to the corresponding list.
from collections import defaultdict

res = defaultdict(list)
for idx, elem in enumerate(cubes):
    # res[tuple(elem)].append(idx)
    res[elem.tobytes()].append(idx)
The runtime can be further improved by using tobytes() instead of converting the keys to tuples.
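Since the byte-string keys are opaque, they can be mapped back to coordinate tuples afterwards if a readable dict is needed; a minimal sketch, assuming cubes is a C-contiguous integer array:

# convert the opaque byte keys back into coordinate tuples when needed
readable = {tuple(np.frombuffer(k, dtype=cubes.dtype)): v for k, v in res.items()}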