How can I speed up cosine similarity between a numpy array and a very large matrix?

aja*_*ahu 5 python cuda gpu cosine-similarity numba

I have a problem where I need to compute cosine similarities between a numpy array of shape (1, 300) and a matrix of shape (5000000, 300). I have tried several variants of the code, and I am now wondering whether there is a way to cut the runtime substantially:
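For reference, here is a minimal single-call sketch of the computation being chunked below (the array names and the scaled-down matrix size are placeholders, not the question's actual data):

import numpy as np
from scipy import spatial

# Placeholder data; in the question the matrix has shape (5000000, 300)
vec = np.random.rand(1, 300).astype(np.float32)
big_matrix = np.random.rand(10_000, 300).astype(np.float32)

# cdist returns the cosine *distance* for every row; similarity = 1 - distance
distances = spatial.distance.cdist(big_matrix, vec, 'cosine').ravel()
similarities = 1.0 - distances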

Version 1: I split my large matrix into 5 smaller matrices of 1M rows each:

import concurrent.futures
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
from scipy import spatial

def cos_matrix_multiplication(vector, matrix_1):
    # Cosine distance between the query vector and every row of this chunk
    v = vector.reshape(1, -1)
    scores1 = spatial.distance.cdist(matrix_1, v, 'cosine')
    return scores1.ravel()

# The five 1M-row chunks of the big matrix
chunks = [mat_small1, mat_small2, mat_small3, mat_small4, mat_small5]

neighbors = []
with concurrent.futures.ThreadPoolExecutor(max_workers=30) as executor:
    # Submit one cdist call per chunk and collect the results as they complete
    future_to_chunk = {executor.submit(cos_matrix_multiplication, vec, chunk): chunk
                       for chunk in chunks}
    for future in concurrent.futures.as_completed(future_to_chunk):
        neighbors.append(future.result())

Runtime: 2.48 seconds

Version 2: Using Numba jit, inspired by this SO answer:

import numpy as np
import numba

@numba.jit(nopython=True, nogil=True)
def cosine_sim(A, B):
    # A: (n, 300) matrix, B: (1, 300) query vector
    scores = np.zeros(A.shape[0])
    for i in range(A.shape[0]):
        v = A[i]
        m = B.shape[1]
        udotv = 0.0
        u_norm = 0.0
        v_norm = 0.0
        for j in range(m):
            udotv += B[0][j] * v[j]
            u_norm += B[0][j] * B[0][j]
            v_norm += v[j] * v[j]
        # cosine similarity of row i against the query vector
        scores[i] = udotv / ((u_norm * v_norm) ** 0.5)
    return scores

cosine_sim(matrix, vec)

Runtime: 2.34 seconds

Version 3: Using CUDA jit (I could not get this to work reproducibly):

import math
from numba import cuda

@cuda.jit
def cosine_sim(A, B, C):
    # Note: each thread executes this entire loop over all rows of A
    for i in range(A.shape[0]):
        v = A[i]
        m = B.shape[1]
        udotv = 0.0
        u_norm = 0.0
        v_norm = 0.0
        for j in range(m):
            udotv += B[0][j] * v[j]
            u_norm += B[0][j] * B[0][j]
            v_norm += v[j] * v[j]

        u_norm = math.sqrt(u_norm)
        v_norm = math.sqrt(v_norm)

        if (u_norm == 0) or (v_norm == 0):
            ratio = 1.0
        else:
            ratio = udotv / (u_norm * v_norm)
        C[i, 0] = ratio


matrix = mat_small1

# Copy the inputs to the GPU and allocate the output column on the device
A_global_mem = cuda.to_device(matrix)
B_global_mem = cuda.to_device(vec)
C_global_mem = cuda.device_array((matrix.shape[0], 1))

threadsperblock = (16, 16)
blockspergrid_x = int(math.ceil(A_global_mem.shape[0] / threadsperblock[0]))
blockspergrid_y = int(math.ceil(B_global_mem.shape[1] / threadsperblock[1]))
blockspergrid = (blockspergrid_x, blockspergrid_y)

cosine_sim[blockspergrid, threadsperblock](A_global_mem, B_global_mem, C_global_mem)

C = C_global_mem.copy_to_host()

The result is: CudaAPIError: [702] Call to cuMemcpyDtoH results in CUDA_ERROR_LAUNCH_TIMEOUT

The matrix is dense, my GPU has 8 GB of RAM, and the total size of the matrix is about 4.7 GB. Can the GPU speed this up?

Ana*_*eev 2

Try replacing the ThreadPoolExecutor with a ProcessPoolExecutor (you have already imported it). The former is intended for asynchronous calls, not for CPU-bound tasks, although the documentation does not state this directly.
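A minimal sketch of that swap, reusing the question's cos_matrix_multiplication, vec, and mat_small* chunks (assuming the chunks are plain numpy arrays and the worker function is defined at module level so it can be pickled):

import concurrent.futures
from concurrent.futures import ProcessPoolExecutor

chunks = [mat_small1, mat_small2, mat_small3, mat_small4, mat_small5]

neighbors = []
# Separate processes sidestep the GIL, so each chunk's cdist call can run on its own core
with ProcessPoolExecutor(max_workers=5) as executor:
    futures = [executor.submit(cos_matrix_multiplication, vec, chunk) for chunk in chunks]
    for future in concurrent.futures.as_completed(futures):
        neighbors.append(future.result())

Note that each submitted array is pickled and copied to its worker process, so the per-chunk copy overhead partly offsets the parallel speedup.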