Asked by oro*_*ome · Tags: python, analytics, nlp, r, scikit-learn
I would like to preprocess a corpus of documents using Python in the same way that I can in R. For example, given an initial corpus `corpus`, I would like to end up with a preprocessed corpus that corresponds to the one produced with the following R code:
library(tm)
library(SnowballC)
corpus = tm_map(corpus, tolower)
corpus = tm_map(corpus, removePunctuation)
corpus = tm_map(corpus, removeWords, c("myword", stopwords("english")))
corpus = tm_map(corpus, stemDocument)
Is there a simple or straightforward, preferably pre-built, way of doing this in Python? And is there a way to ensure exactly the same results?
For example, I would like to preprocess
@Apple ear pods are amazing! Best sound from in-ear headphones I've ever had!
into
ear pod amaz best sound inear headphon ive ever
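For reference, the tm pipeline can be roughly imitated in pure Python. The stopword list and the suffix-stripping "stemmer" below are toy stand-ins I chose for illustration (not tm's actual stopword list and not SnowballC's algorithm), so on real data the output will differ at the edges; they are only tuned to reproduce this one example:

```python
import string

# Toy stopword set: the custom word "apple" plus a few common English stopwords,
# standing in for c("apple", stopwords("english")) in the R code.
STOPWORDS = {"apple", "are", "from", "had", "the", "a", "is", "i"}

def toy_stem(word):
    # Crude suffix stripping; only a stand-in for SnowballC's stemDocument.
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(doc):
    doc = doc.lower()                                                  # tolower
    doc = doc.translate(str.maketrans("", "", string.punctuation))     # removePunctuation
    tokens = [t for t in doc.split() if t not in STOPWORDS]            # removeWords
    return " ".join(toy_stem(t) for t in tokens)                       # stemDocument (approx.)

print(preprocess("@Apple ear pods are amazing! Best sound from in-ear headphones I've ever had!"))
# ear pod amaz best sound inear headphon ive ever
```

Note that "I've" becomes "ive" before stopword removal because punctuation is stripped first, which is the same ordering effect that leaves "ive" in the tm output above.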
It seems tricky to get things exactly the same between nltk and tm in the preprocessing steps, so I think the best approach is to use rpy2 to run the preprocessing in R and pull the results into Python:
import rpy2.robjects as ro

# Run the tm pipeline in R, then extract each document's text into a Python list
preproc = [x[0] for x in ro.r('''
tweets = read.csv("tweets.csv", stringsAsFactors=FALSE)
library(tm)
library(SnowballC)
corpus = Corpus(VectorSource(tweets$Tweet))
corpus = tm_map(corpus, tolower)
corpus = tm_map(corpus, removePunctuation)
corpus = tm_map(corpus, removeWords, c("apple", stopwords("english")))
corpus = tm_map(corpus, stemDocument)''')]
Then you can load it into scikit-learn. The only thing you need to do to get things to match between the CountVectorizer and the DocumentTermMatrix is to remove terms of length less than 3:
from sklearn.feature_extraction.text import CountVectorizer
def mytokenizer(x):
return [y for y in x.split() if len(y) > 2]
# Full document-term matrix
cv = CountVectorizer(tokenizer=mytokenizer)
X = cv.fit_transform(preproc)
X
# <1181x3289 sparse matrix of type '<type 'numpy.int64'>'
# with 8980 stored elements in Compressed Sparse Column format>
# Sparse terms removed
cv2 = CountVectorizer(tokenizer=mytokenizer, min_df=0.005)
X2 = cv2.fit_transform(preproc)
X2
# <1181x309 sparse matrix of type '<type 'numpy.int64'>'
# with 4669 stored elements in Compressed Sparse Column format>
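To see why the custom tokenizer matters: scikit-learn's default token pattern keeps any token of two or more word characters, while tm's DocumentTermMatrix (as I understand its defaults) drops terms shorter than three characters, so short tokens would otherwise inflate the Python vocabulary. A self-contained toy illustration (made-up document, not the tweets data):

```python
from sklearn.feature_extraction.text import CountVectorizer

def mytokenizer(x):
    # Keep only tokens of length 3 or more, mimicking tm's default
    return [y for y in x.split() if len(y) > 2]

docs = ["go to the big dog no"]  # toy document with several short tokens

default_cv = CountVectorizer()                      # default pattern keeps 2+ char tokens
custom_cv = CountVectorizer(tokenizer=mytokenizer)  # drops tokens under 3 chars

print(sorted(default_cv.fit(docs).vocabulary_))  # ['big', 'dog', 'go', 'no', 'the', 'to']
print(sorted(custom_cv.fit(docs).vocabulary_))   # ['big', 'dog', 'the']
```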
Let's verify that this matches the R results:
tweets = read.csv("tweets.csv", stringsAsFactors=FALSE)
library(tm)
library(SnowballC)
corpus = Corpus(VectorSource(tweets$Tweet))
corpus = tm_map(corpus, tolower)
corpus = tm_map(corpus, removePunctuation)
corpus = tm_map(corpus, removeWords, c("apple", stopwords("english")))
corpus = tm_map(corpus, stemDocument)
dtm = DocumentTermMatrix(corpus)
dtm
# A document-term matrix (1181 documents, 3289 terms)
#
# Non-/sparse entries: 8980/3875329
# Sparsity : 100%
# Maximal term length: 115
# Weighting : term frequency (tf)
sparse = removeSparseTerms(dtm, 0.995)
sparse
# A document-term matrix (1181 documents, 309 terms)
#
# Non-/sparse entries: 4669/360260
# Sparsity : 99%
# Maximal term length: 20
# Weighting : term frequency (tf)
As you can see, the number of stored elements and the number of terms now match exactly between the two approaches.
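The two sparsity thresholds also line up arithmetically: removeSparseTerms(dtm, 0.995) keeps a term when at most 99.5% of documents lack it, i.e. its document frequency is at least 0.5% of the 1181 documents, which is the same cutoff min_df=0.005 imposes in CountVectorizer. A quick check of the implied document count (my arithmetic, not from the original post):

```python
import math

n_docs = 1181
sparse_threshold = 0.995               # removeSparseTerms keeps sparsity <= 0.995
min_df = 1 - sparse_threshold          # equivalent min_df fraction for CountVectorizer
min_doc_count = math.ceil(n_docs * min_df)  # minimum documents a surviving term appears in
print(min_doc_count)  # 6
```

So under either threshold, a term survives only if it appears in at least 6 of the 1181 documents, which is why both approaches end up with the same 309 terms.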