An efficient way to create a term density matrix from a pandas DataFrame

nik*_*osd · 7 · python, r, nltk, pandas

I am trying to create a term density matrix from a pandas DataFrame, so that I can rate the terms that appear in it. I also want to be able to keep the "spatial" aspect of my data (see the comment at the end of the post for what I mean by that).

I am new to pandas and NLTK, so I hope my problem can be solved with some existing tools.

I have a DataFrame with two columns of interest, say 'title' and 'page':

    import pandas as pd
    import re

    df = pd.DataFrame({'title':['Delicious boiled egg','Fried egg ','Split orange','Something else'], 'page':[1, 2, 3, 4]})
    df.head()

       page                 title
    0     1  Delicious boiled egg
    1     2            Fried egg 
    2     3          Split orange
    3     4        Something else

My goal is to clean up the text and pass the terms of interest on to a TDM DataFrame. I use two functions to help me clean up the strings:

    import nltk.classify
    from nltk.tokenize import wordpunct_tokenize
    from nltk.corpus import stopwords
    import string   

    def remove_punct(strin):
        '''
        returns a string with the punctuation marks removed, and all lower case letters
        input: strin, an ascii string. convert using strin.encode('ascii','ignore') if it is unicode 
        '''
        return strin.translate(string.maketrans("",""), string.punctuation).lower()

    sw = stopwords.words('english')

    def tok_cln(strin):
        '''
        tokenizes string and removes stopwords
        '''
        return set(nltk.wordpunct_tokenize(strin)).difference(sw)
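A side note: remove_punct as written is Python 2 only (Python 3 strings have no two-argument translate, and string.maketrans is gone), and tok_cln returns a set, so a word repeated inside a single title is only counted once. A rough Python 3 sketch of the same punctuation stripper, under those caveats:

    def remove_punct_py3(strin):
        '''
        hypothetical Python 3 equivalent: str.maketrans('', '', chars)
        builds a table that deletes every character in string.punctuation
        '''
        return strin.translate(str.maketrans('', '', string.punctuation)).lower()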

And a function that performs the DataFrame manipulation:

    def df2tdm(df,titleColumn,placementColumn,newPlacementColumn):
        '''
        takes in a DataFrame with at least two columns, and returns a dataframe with the term density matrix
        of the words appearing in the titleColumn
        Inputs: df, a DataFrame containing titleColumn, placementColumn among others
        Outputs: tdm_df, a DataFrame containing newPlacementColumn and columns with all the terms in df[titleColumn]
        '''
        tdm_df = pd.DataFrame(index=df.index, columns=[newPlacementColumn])
        tdm_df = tdm_df.fillna(0)
        for idx in df.index:
            for word in tok_cln( remove_punct(df[titleColumn][idx].encode('ascii','ignore')) ):
                if word not in tdm_df.columns:
                    newcol = pd.DataFrame(index = df.index, columns = [word])
                    tdm_df = tdm_df.join(newcol)
                tdm_df[newPlacementColumn][idx] = df[placementColumn][idx]
                tdm_df[word][idx] = 1
        return tdm_df.fillna(0,inplace = False)

    tdm_df = df2tdm(df,'title','page','pub_page')
    tdm_df.head()

This returns:

       pub_page  boiled  egg  delicious  fried  orange  split  something  else
    0         1       1    1          1      0       0      0          0     0
    1         2       0    1          0      1       0      0          0     0
    2         3       0    0          0      0       1      1          0     0
    3         4       0    0          0      0       0      0          1     1

But this gets very slow when parsing large sets (output of hundreds of thousands of rows and thousands of columns). My two questions:

Can I speed up this implementation?

Is there some other tool I could use to get this done?

I want to be able to keep the "spatial" aspect of my data; for example, if 'egg' appears frequently in pages 1-10 and then reappears frequently in pages 500-520, I want to know that.
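For what it's worth, a minimal sketch of that last point, assuming a tdm_df shaped like the df2tdm output above (one row per page, terms as 0/1 columns): bucket pub_page into fixed-width windows and sum the term columns per window, so a term that clusters in pages 1-10 and again around 500-520 shows up as two separate peaks.

    import numpy as np

    # 10-page windows; pd.cut labels each row with its window, and
    # groupby(...).sum() then yields per-window term counts
    bins = np.arange(0, tdm_df['pub_page'].max() + 10, 10)
    by_window = tdm_df.groupby(pd.cut(tdm_df['pub_page'], bins)).sum()
    by_window['egg']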

her*_*rfz · 20

You can use scikit-learn's CountVectorizer:

In [14]: from sklearn.feature_extraction.text import CountVectorizer

In [15]: countvec = CountVectorizer()

In [16]: countvec.fit_transform(df.title)
Out[16]: 
<4x8 sparse matrix of type '<type 'numpy.int64'>'
    with 9 stored elements in Compressed Sparse Column format>

It returns the term-document matrix in a sparse representation, because such matrices are usually huge and, well, sparse.
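(Note that CountVectorizer can also take over most of the manual cleanup from the question: lowercasing is on by default, and stop-word filtering is a constructor parameter. Both are standard scikit-learn options; 'english' here is scikit-learn's built-in stop list, not NLTK's:)

    countvec = CountVectorizer(lowercase=True, stop_words='english')
    countvec.fit_transform(df.title)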

For your particular example, I guess converting it back to a DataFrame would still work:

In [17]: pd.DataFrame(countvec.fit_transform(df.title).toarray(), columns=countvec.get_feature_names())
Out[17]: 
   boiled  delicious  egg  else  fried  orange  something  split
0       1          1    1     0      0       0          0      0
1       0          0    1     0      1       0          0      0
2       0          0    0     0      0       1          0      1
3       0          0    0     1      0       0          1      0

[4 rows x 8 columns]
Run Code Online (Sandbox Code Playgroud)