Stemming words using NLTK (python)

use*_*568 4 python stemming

I am new to text processing in Python, and I am trying to stem the words in a text document of about 5000 lines.

I wrote the script below:

import re

from bs4 import BeautifulSoup
from nltk.corpus import stopwords  # Import the stop word list
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')

def Description_to_words(raw_Description):
    # 1. Remove HTML
    Description_text = BeautifulSoup(raw_Description).get_text() 
    # 2. Remove non-letters        
    letters_only = re.sub("[^a-zA-Z]", " ", Description_text) 
    # 3. Convert to lower case, split into individual words
    words = letters_only.lower().split()                       

    # 4. Remove stop words
    stops = set(stopwords.words("english"))
    meaningful_words = [w for w in words if not w in stops]
    # 5. Stem words
    words = [stemmer.stem(w) for w in words]

    # 6. Join the words back into one string separated by space, 
    # and return the result.
    return( " ".join( meaningful_words ))   

clean_Description = Description_to_words(train["Description"][15])

But when I test the result, the words are not stemmed. Can anyone help me figure out what the problem is, i.e. what I am doing wrong in the `Description_to_words` function?

Also, when I run the stemmer on its own like below, it works:

>>> from nltk.tokenize import sent_tokenize, word_tokenize
>>> words = word_tokenize("MOBILE APP - Unable to add reading")
>>> 
>>> for w in words:
...     print(stemmer.stem(w))
... 
mobil
app
-
unabl
to
add
read

cs9*_*s95 5

Here is each step of your function, fixed.

  1. Remove the HTML.

    Description_text = BeautifulSoup(raw_Description, "html.parser").get_text()
    
  2. Remove non-letters, but don't remove whitespace just yet. You can also simplify your regex a little.

    letters_only = re.sub(r"[^\w\s]", " ", Description_text)
    
  3. Convert to lowercase and split into individual words. I recommend using `word_tokenize` again here.

    from nltk.tokenize import word_tokenize
    words = word_tokenize(letters_only.lower())                  
    
  4. Remove the stop words.

    stops = set(stopwords.words("english")) 
    meaningful_words = [w for w in words if w not in stops]
    
  5. Stemming. This is the other issue: stem `meaningful_words`, not `words`.

    return ' '.join(stemmer.stem(w) for w in meaningful_words)
    

  • @user3734568 Yes, you can, just change `stemmer.stem(w)` to `lemmatizer.lemmatize(word)` (2 upvotes)