Changing the algorithm for finding nearest neighbors

Tags: python, algorithm, k-means, tensorflow

I am building a recommender system that suggests the 20 best-fitting songs to a user. I have trained my model and I am ready to recommend songs for a given playlist! One problem I ran into, however, is that I need to embed that new playlist in order to find the closest related playlists in the embedding space using k-means.

To recommend songs, I first cluster the learned embeddings of all training playlists, then pick the "neighbor" playlists of my given test playlist as all the other playlists in the same cluster. I then take all the tracks from those playlists and feed the test-playlist embedding together with these "neighboring" tracks into my model for prediction. This ranks the "neighboring" tracks by how likely they are (under my model) to appear next in the given test playlist.

from pathlib import Path

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from tensorflow import keras

desired_user_id = 123
model_path = Path(PATH, 'model.h5')
print('using model: %s' % model_path)
model = keras.models.load_model(model_path)
print('Loaded model!')

mlp_user_embedding_weights = (next(iter(filter(lambda x: x.name == 'mlp_user_embedding', model.layers))).get_weights())

# get the latent embedding for your desired user
user_latent_matrix = mlp_user_embedding_weights[0]
one_user_vector = user_latent_matrix[desired_user_id,:]
one_user_vector = np.reshape(one_user_vector, (1,32))

print('\nPerforming kmeans to find the nearest users/playlists...')
# get 100 similar users
kmeans = KMeans(n_clusters=100, random_state=0, verbose=0).fit(user_latent_matrix)
desired_user_label = kmeans.predict(one_user_vector)
user_label = kmeans.labels_
neighbors = []
for user_id, user_label in enumerate(user_label):
    if user_label == desired_user_label:
        neighbors.append(user_id)
print('Found {0} neighbor users/playlists.'.format(len(neighbors)))

tracks = []
for user_id in neighbors:
    tracks += list(df[df['pid'] == int(user_id)]['trackindex'])
print('Found {0} neighbor tracks from these users.'.format(len(tracks))) 

users = np.full(len(tracks), desired_user_id, dtype='int32')
items = np.array(tracks, dtype='int32')

# and predict tracks for my user
results = model.predict([users, items], batch_size=100, verbose=0)
results = results.tolist()
print('Ranked the tracks!')

results_df = pd.DataFrame(np.nan, index=range(len(results)), columns=['probability', 'track_name', 'artist_name'])
print(results_df.shape)

# loop through and get the probability (of being in the playlist according to my model), the track, and the track's artist 
for i, prob in enumerate(results):
    results_df.loc[i] = [prob[0], df[df['trackindex'] == i].iloc[0]['track_name'], df[df['trackindex'] == i].iloc[0]['artist_name']]
results_df = results_df.sort_values(by=['probability'], ascending=False)

results_df.head(20)

Instead of the code above, I would like to use this https://www.tensorflow.org/recommenders/examples/basic_retrieval#building_a_candidate_ann_index or the official GitHub repository from Spotify, https://github.com/spotify/annoy. Unfortunately, I cannot figure out how to use it so that the new program gives me the top 20 tracks for the user. How do I change this?


Edit

What I have tried:

from annoy import AnnoyIndex
import random
desired_user_id = 123
model_path = Path(PATH, 'model.h5')
print('using model: %s' % model_path)
model = keras.models.load_model(model_path)
print('Loaded model!')
    
mlp_user_embedding_weights = (next(iter(filter(lambda x: x.name == 'mlp_user_embedding', model.layers))).get_weights())
    
# get the latent embedding for your desired user
user_latent_matrix = mlp_user_embedding_weights[0]
one_user_vector = user_latent_matrix[desired_user_id,:]
one_user_vector = np.reshape(one_user_vector, (1,32))

t = AnnoyIndex(desired_user_id , one_user_vector)  #Length of item vector that will be indexed
for i in range(1000):
    v = [random.gauss(0, 1) for z in range(f)]
    t.add_item(i, v)

t.build(10) # 10 trees
t.save('test.ann')

u = AnnoyIndex(desired_user_id , one_user_vector)
u.load('test.ann') # super fast, will just mmap the file
print(u.get_nns_by_item(0, 1000)) # will find the 1000 nearest neighbors
# Now how do I get the probability and the values?

Answer by Ger*_*erd:

In your existing code you actually have two prediction steps: one to find the 100 nearest neighbor users, and one to rank all of those users' tracks with respect to the current user. In general, you should first decide which of these steps (or both) you want to replace with Annoy.

Looking at the example code on GitHub, you do not need the t = AnnoyIndex ... part here; it just creates some sample data to demonstrate usage.

u = AnnoyIndex(f, metric) takes the number of dimensions as the input parameter f, and one of "angular", "euclidean", "manhattan", "hamming", or "dot" as metric. From your question I cannot tell how many dimensions you have in your case, and you will probably have to experiment with the metric yourself to find what gives the best results.

After that, you have to load your data into the AnnoyIndex object; that data probably has to come from user_latent_matrix and/or users/items.

Finally, you should be able to retrieve the 20 nearest neighbors of a given user or track id i by running u.get_nns_by_item(i, 20). Setting include_distances=True will additionally give you the corresponding distances (instead of the probabilities from your approach).

Hope this gives you some hints on how to move forward.