I am creating word representations of sentences. I look up each word that appears in a sentence in the file "vectors.txt" to get its embedding vector, then average the vectors of the words in the sentence. Here is my code:
import nltk
import numpy as np
from nltk import FreqDist
from nltk.corpus import brown

news = brown.words(categories='news')
news_sents = brown.sents(categories='news')

fdist = FreqDist(w.lower() for w in news)
vocabulary = [word for word, _ in fdist.most_common(10)]
num_sents = len(news_sents)

def averageEmbeddings(sentenceTokens, embeddingLookupTable):
    listOfEmb = []
    for token in sentenceTokens:
        embedding = embeddingLookupTable[token]
        listOfEmb.append(embedding)
    return sum(np.asarray(listOfEmb)) / float(len(listOfEmb))

embeddingVectors = {}
with open("D:\\Embedding\\vectors.txt") as file:
    for line in file:
        (key, *val) = line.split()
        embeddingVectors[key] = val
for i in range(num_sents):
    features = {} …

I want to import a file into a dictionary for further processing. The file contains embedding vectors for NLP. It looks like:
the 0.011384 0.010512 -0.008450 -0.007628 0.000360 -0.010121 0.004674 -0.000076
of 0.002954 0.004546 0.005513 -0.004026 0.002296 -0.016979 -0.011469 -0.009159
and 0.004691 -0.012989 -0.003122 0.004786 -0.002907 0.000526 -0.006146 -0.003058
one 0.014722 -0.000810 0.003737 -0.001110 -0.011229 0.001577 -0.007403 -0.005355
The code I am using is:
embeddingTable = {}
with open("D:\\Embedding\\test.txt") as f:
    for line in f:
        (key, val) = line.split()
        d[key] = val
print(embeddingTable)
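The two-target unpacking is the likely culprit: `line.split()` returns one key plus several floats per line, so `(key, val) = line.split()` raises a ValueError (there is also a stray `d` where `embeddingTable` was presumably meant). A sketch of a loader that works for the format shown above, using starred unpacking and converting the values to floats; the in-memory sample stands in for the real file:

```python
# Sketch: load "word v1 v2 ..." lines into a dict of float vectors.
# An in-memory sample replaces the real file path for illustration.
sample = [
    "the 0.011384 0.010512 -0.008450",
    "of 0.002954 0.004546 0.005513",
]

embeddingTable = {}
for line in sample:
    key, *vals = line.split()          # starred unpacking: key, then the rest
    embeddingTable[key] = [float(v) for v in vals]

print(embeddingTable["the"])
```
With a real file, the loop body is the same; only the iteration source changes to `for line in f:` inside the `with open(...)` block.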
Error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-22-3612e9012ffe> in <module>()
24 with open("D:\\Embedding\\test.txt") as f:
25 for line in f:
---> 26 …

I have a table whose entries are too long, consisting of 1s and 0s. For example, I have the table:
| Sent id.| BoW. |
|---------|----------|
| 1 | 10100101 |
| 2 | 00011110 |
| 3 | 10101111 |
I want to create a new table that splits the BoW column entries into chunks of some arbitrary length (4 in this case) and assigns a chunk number:
| Sent id.| Chunk No. | BoW. |
|---------|-----------|------|
| 1 | 1 | 1010 |
| 1 | 2 | 0101 |
| 2 | 1 | 0001 |
| 2 | 2 | 1110 |
| 3 | 1 | 1010 |
| 3 | 2 | 1111 |
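One way to get this result in SQLite is to join against a small helper table of chunk numbers and cut the string with `substr()`. A sketch using Python's `sqlite3` module; the table and column names, the chunk length of 4, and the two-row helper table are assumptions based on the example above:

```python
import sqlite3

# In-memory database with the example data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Bow (sent_id INTEGER, bow TEXT);
    INSERT INTO Bow VALUES (1, '10100101'), (2, '00011110'), (3, '10101111');
    CREATE TABLE chunks (n INTEGER);
    INSERT INTO chunks VALUES (1), (2);  -- enough chunk numbers to cover the string
""")

# Each sentence row is paired with every chunk number;
# substr() then extracts the n-th 4-character slice.
rows = conn.execute("""
    SELECT b.sent_id, c.n AS chunk_no,
           substr(b.bow, (c.n - 1) * 4 + 1, 4) AS bow_chunk
    FROM Bow b JOIN chunks c
    ORDER BY b.sent_id, c.n
""").fetchall()

for row in rows:
    print(row)
```
The same `SELECT` could feed a `CREATE TABLE Bow2 AS …` statement; for variable-length strings the helper table would need enough rows to cover `length(bow) / 4` chunks.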
I am a beginner and tried searching the documentation, but without success. Maybe something like this, but with the proper function:
CREATE TABLE Bow2 AS
SELECT …

I have a text file with 4623 lines; the entries are strings of 0s and 1s (e.g. 01010111). I compare them character by character. I have several datasets with string lengths of 100, 1,000 and 10,000. The 1,000-length set takes 25 hours to compute, and the 10,000-length set takes 60 hours. Is there a way to speed this up? I tried the multiprocessing library, but it just repeated values; maybe I used it wrong. Code:
f = open("/path/to/file/file.txt", 'r')
l = [s.strip('\n') for s in f]
f.close()

for a in range(0, len(l)):
    for b in range(0, len(l)):
        if (a < b):
            result = 0
        if (a == b):
            result = 1
        else:
            counter = 0
            for i in range(len(l[a])):
                if (int(l[a][i]) == int(l[b][i]) == 1):
                    counter += 1
            result = counter / 10000
        print((a + 1), (b + 1), result)
I am new to Python, so I think this code needs some optimization. Any help would be great. Thanks in advance.
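For comparison, the count of positions where two strings both have a 1 is just a dot product of 0/1 vectors, so NumPy can compute all pairs at once with a single matrix product instead of three nested Python loops. A sketch; the small sample strings stand in for the real file, and dividing by the string length replaces the hard-coded 10000:

```python
import numpy as np

# Sample data standing in for the 4623-line file.
strings = ["0101", "0111", "1111"]

# Each row: one string as a 0/1 integer vector.
m = np.array([[int(ch) for ch in s] for s in strings])

# counts[a, b] = number of positions where strings a and b are both 1,
# for ALL pairs at once.
counts = m @ m.T

# Print the upper-triangle pairs (a < b), normalized by string length.
n, length = m.shape
for a in range(n):
    for b in range(a + 1, n):
        print(a + 1, b + 1, counts[a, b] / length)
```
The matrix product pushes the character-by-character loop into optimized C code, which is typically orders of magnitude faster than per-character Python comparisons.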