I created a trie to store all the words in an English dictionary (without their definitions). The point is that I can then retrieve all words that contain only letters within a given range.
The text file containing all the words is about 2.7 MB, but after building the tree and writing it to a file with pickle, the file is larger than 33 MB.
Where does this size difference come from? I thought I would save space by not having to store multiple copies of the same letters for different words; for example, for the words app and apple I should only need 5 nodes: a -> p -> p -> l -> e.
My code is below:
import pickle

class WordTrieNode:
    def __init__(self, nodeLetter='', parentNode=None, isWordEnding=False):
        self.nodeLetter = nodeLetter
        self.parentNode = parentNode
        self.isWordEnding = isWordEnding
        self.children = [None]*26  # One entry for each lowercase letter of the alphabet

    def getWord(self):
        if(self.parentNode is None):
            return ''
        return self.parentNode.getWord() + self.nodeLetter

    def isEndOfWord(self):
        return self.isWordEnding

    def markEndOfWord(self):  # was missing 'self'
        self.isWordEnding = True

    def insertWord(self, word):
        if(len(word) == 0):
            return
        char = word[0]
        idx = ord(char) - ord('a')
        if(len(word) == 1):
            if(self.children[idx] is None):
                self.children[idx] = WordTrieNode(char, self, True)
            else:
                self.children[idx].markEndOfWord()
        else:
            if(self.children[idx] is None):
                self.children[idx] = WordTrieNode(char, self, False)
            self.children[idx].insertWord(word[1:])

    def getAllWords(self):
        for node in self.children:
            if node is not None:
                if node.isEndOfWord():
                    print(node.getWord())
                node.getAllWords()

    def getAllWordsInRange(self, low='a', high='z'):
        i = ord(low) - ord('a')
        j = ord(high) - ord('a')
        for node in self.children[i:j+1]:
            if node is not None:
                if node.isEndOfWord():
                    print(node.getWord())
                node.getAllWordsInRange(low, high)

def main():
    tree = WordTrieNode("", None, False)
    with open('en.txt') as file:
        for line in file:
            tree.insertWord(line.strip('\n'))
    with open("treeout", 'wb') as output:
        pickle.dump(tree, output, pickle.HIGHEST_PROTOCOL)
    #tree.getAllWordsInRange('a', 'l')
    #tree.getAllWords()

if __name__ == "__main__":
    main()
The trie's nodes are large because each one stores links for every possible next letter. As you can see in the code, every node holds a list of 26 child links, most of which are None for real dictionary words.
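You can see the per-node overhead by pickling a single empty node on its own. This is a rough sketch (the exact byte count varies with the Python version and pickle protocol), using a stripped-down stand-in for the node class above:

```python
import pickle

class WordTrieNode:
    def __init__(self, nodeLetter='', parentNode=None, isWordEnding=False):
        self.nodeLetter = nodeLetter
        self.parentNode = parentNode
        self.isWordEnding = isWordEnding
        self.children = [None] * 26  # 26 slots, mostly unused

# A single empty node already pickles to over 100 bytes (attribute names,
# class reference, and 26 None entries all get serialized), so hundreds of
# thousands of nodes easily dwarf the 2.7 MB flat word list.
data = pickle.dumps(WordTrieNode(), pickle.HIGHEST_PROTOCOL)
print(len(data))
```

Pickle also records the attribute names of every instance's `__dict__`, which is pure overhead relative to the raw letters.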
More compact schemes are possible (https://en.wikipedia.org/wiki/Trie#Compressing_tries), at the cost of greater complexity and lower speed.
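A cheap middle ground, as an illustrative sketch rather than the original code, is to keep the children in a dict so each node stores only the letters that actually occur, and to use `__slots__` so no per-instance `__dict__` is pickled. (This sketch drops the parentNode link and getWord for brevity.)

```python
class CompactTrieNode:
    __slots__ = ('children', 'isWordEnding')  # avoids pickling a __dict__

    def __init__(self):
        self.children = {}        # letter -> CompactTrieNode, sparse
        self.isWordEnding = False

    def insertWord(self, word):
        node = self
        for char in word:
            node = node.children.setdefault(char, CompactTrieNode())
        node.isWordEnding = True

def countNodes(node):
    # Count this node plus all descendants.
    return 1 + sum(countNodes(child) for child in node.children.values())

root = CompactTrieNode()
for word in ('app', 'apple'):
    root.insertWord(word)

# 'app' and 'apple' share the a -> p -> p prefix, so only 5 letter
# nodes exist besides the root.
print(countNodes(root) - 1)  # 5
```

Classes with `__slots__` pickle correctly with protocol 2 or higher, which `pickle.HIGHEST_PROTOCOL` satisfies.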