I am trying to compute how similar each set is to every other set in a collection by counting matching elements. Once I have the counts, I want to perform further operations on each set's top X (currently 100) most similar sets, i.e. the ones with the highest counts. Below is a sample input, plus an output showing the count of matching elements for each pair of sets:
{
  "list1": [
    "label1",
    "label2",
    "label3"
  ],
  "list2": [
    "label2",
    "label3",
    "label4"
  ],
  "list3": [
    "label3",
    "label4",
    "label5"
  ],
  "list4": [
    "label4",
    "label5",
    "label6"
  ]
}
{
  "list1": {
    "list1": 3,
    "list2": 2,
    "list3": 1,
    "list4": 0
  },
  "list2": {
    "list1": 2,
    "list2": 3,
    "list3": 2,
    "list4": 1
  },
  "list3": {
    "list1": 1,
    "list2": 2,
    "list3": 3,
    "list4": 2
  },
  "list4": {
    "list1": 0,
    "list2": 1,
    "list3": 2,
    "list4": 3
  }
}
I came up with the code below, but with an input of about 200,000 sets it takes several hours to run. The number of elements/labels per set varies, but on average each set has about 10 elements. The total number of unique label values is about 300.
import json

input = {}
input['list1'] = ['label1', 'label2', 'label3']
input['list2'] = ['label2', 'label3', 'label4']
input['list3'] = ['label3', 'label4', 'label5']
input['list4'] = ['label4', 'label5', 'label6']
print(json.dumps(input, indent=2))

input = {key: set(value) for key, value in input.items()}
output = {key1: {key2: 0 for key2 in input.keys()} for key1 in input.keys()}
for key1, value1 in input.items():
    for key2, value2 in input.items():
        for element in value1:
            if element in value2:
                count = output[key1][key2]
                output[key1][key2] = count + 1
print(json.dumps(output, indent=2))
Does anyone have ideas on how to improve the execution time of the code above when the number of sets is large?

Thanks for any suggestions!
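A first simplification worth noting (equivalent output to the posted code, and a useful baseline): Python sets already support an intersection operator, so the three inner loops collapse to a single `len(v1 & v2)`. A minimal sketch on the sample data:

```python
import json

data = {
    "list1": ["label1", "label2", "label3"],
    "list2": ["label2", "label3", "label4"],
    "list3": ["label3", "label4", "label5"],
    "list4": ["label4", "label5", "label6"],
}

# Convert each list to a set once, then let the built-in
# C-level intersection do the per-pair counting.
sets = {key: set(value) for key, value in data.items()}
output = {k1: {k2: len(v1 & v2) for k2, v2 in sets.items()}
          for k1, v1 in sets.items()}
print(json.dumps(output, indent=2))
```

This is still O(n²) in the number of sets, but each pair is now handled by a single optimized operation instead of a Python-level loop.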
Using an inverted index lets you avoid computing intersections with the sets whose intersection cardinality is 0:
from collections import defaultdict, Counter
from itertools import chain
from pprint import pprint

data = {
    "list1": ["label1", "label2", "label3"],
    "list2": ["label2", "label3", "label4"],
    "list3": ["label3", "label4", "label5"],
    "list4": ["label4", "label5", "label6"]
}

index = defaultdict(list)
for key, values in data.items():
    for value in values:
        index[value].append(key)

result = {key: Counter(chain.from_iterable(index[label] for label in labels))
          for key, labels in data.items()}
pprint(result)
Output
{'list1': Counter({'list1': 3, 'list2': 2, 'list3': 1}),
 'list2': Counter({'list2': 3, 'list1': 2, 'list3': 2, 'list4': 1}),
 'list3': Counter({'list3': 3, 'list2': 2, 'list4': 2, 'list1': 1}),
 'list4': Counter({'list4': 3, 'list3': 2, 'list2': 1})}
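A side benefit of `Counter` here: the follow-up step from the question (keeping only the top X most similar sets) becomes a one-liner with `most_common`. A small sketch, where `top_x` and the exclusion of the set itself are assumptions based on the question's description:

```python
from collections import Counter

# Counters as produced by the inverted-index step above (sample data).
result = {
    "list1": Counter({"list1": 3, "list2": 2, "list3": 1}),
    "list2": Counter({"list2": 3, "list1": 2, "list3": 2, "list4": 1}),
}

top_x = 2  # the question uses 100

# most_common(top_x + 1) because each set matches itself with the
# highest possible count; drop that entry, then keep top_x others.
top_similar = {
    key: [other for other, _ in counts.most_common(top_x + 1)
          if other != key][:top_x]
    for key, counts in result.items()
}
print(top_similar)
```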
If you strictly need them, you can include the sets with an intersection cardinality of 0 like this:
result = {key: {k: value.get(k, 0) for k in data} for key, value in result.items()}
pprint(result)
Output
{'list1': {'list1': 3, 'list2': 2, 'list3': 1, 'list4': 0},
 'list2': {'list1': 2, 'list2': 3, 'list3': 2, 'list4': 1},
 'list3': {'list1': 1, 'list2': 2, 'list3': 3, 'list4': 2},
 'list4': {'list1': 0, 'list2': 1, 'list3': 2, 'list4': 3}}
The second option comes from the observation that most of the time is spent computing set intersections, so a faster data structure such as roaring bitmaps (here via pyroaring) can help:
from collections import defaultdict
from pprint import pprint

from pyroaring import BitMap

data = {
    "list1": ["label1", "label2", "label3"],
    "list2": ["label2", "label3", "label4"],
    "list3": ["label3", "label4", "label5"],
    "list4": ["label4", "label5", "label6"]
}

# all labels
labels = set().union(*data.values())
# lookup mapping each label to an integer
lookup = {key: value for value, key in enumerate(labels)}
roaring_data = {key: BitMap(lookup[v] for v in value) for key, value in data.items()}

result = defaultdict(dict)
for k_out, outer in roaring_data.items():
    for k_in, inner in roaring_data.items():
        result[k_out][k_in] = len(outer & inner)
pprint(result)
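One further refinement of the nested loop above (an optimization not covered in the answer): the count matrix is symmetric, since |A ∩ B| == |B ∩ A|, so each unordered pair only needs to be intersected once. A sketch using plain sets and `itertools.combinations`:

```python
from itertools import combinations

data = {
    "list1": {"label1", "label2", "label3"},
    "list2": {"label2", "label3", "label4"},
    "list3": {"label3", "label4", "label5"},
    "list4": {"label4", "label5", "label6"},
}

# Fill the diagonal first (a set always fully matches itself),
# then compute each unordered pair exactly once and mirror it.
result = {key: {key: len(value)} for key, value in data.items()}
for (k1, v1), (k2, v2) in combinations(data.items(), 2):
    count = len(v1 & v2)
    result[k1][k2] = count
    result[k2][k1] = count
print(result)
```

This halves the number of intersections; it combines naturally with the roaring-bitmap representation as well.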
Output
defaultdict(<class 'dict'>,
            {'list1': {'list1': 3, 'list2': 2, 'list3': 1, 'list4': 0},
             'list2': {'list1': 2, 'list2': 3, 'list3': 2, 'list4': 1},
             'list3': {'list1': 1, 'list2': 2, 'list3': 3, 'list4': 2},
             'list4': {'list1': 0, 'list2': 1, 'list3': 2, 'list4': 3}})
Performance analysis
The plot above shows the performance for dictionaries `data` whose length is given by the x-axis value, where each value of the dictionary is a list of 10 labels randomly sampled from a population of 100. Counterintuitively, the roaring bitmaps perform worse than your solution, while the inverted index takes less than half the time (roughly 40%). The code to reproduce the results above can be found here.
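Since the total number of unique labels is small (about 300 in the question), one more option worth benchmarking, not covered above, is encoding every set as a 0/1 row of a NumPy matrix; a single matrix product then yields all pairwise intersection counts at once. Note that for 200,000 sets the full 200,000 × 200,000 result would not fit in memory, so it would have to be computed in row blocks; the sketch below just shows the idea on the sample data:

```python
import numpy as np

data = {
    "list1": ["label1", "label2", "label3"],
    "list2": ["label2", "label3", "label4"],
    "list3": ["label3", "label4", "label5"],
    "list4": ["label4", "label5", "label6"],
}

keys = list(data)
labels = sorted(set().union(*data.values()))
col = {label: j for j, label in enumerate(labels)}

# One 0/1 row per set, one column per unique label.
matrix = np.zeros((len(keys), len(labels)), dtype=np.int32)
for i, key in enumerate(keys):
    for label in data[key]:
        matrix[i, col[label]] = 1

# counts[i, j] is the intersection cardinality of set i and set j.
counts = matrix @ matrix.T
print(counts)
```

For the top-X step, `np.argpartition` on each row (or block of rows) avoids a full sort.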