ali*_*ong 19 (tags: python, nlp, nltk, sentiment-analysis, vader)
I am using the VADER SentimentAnalyzer to obtain polarity scores. Previously I used the probability scores for positive/negative/neutral, but I just realized that the "compound" score, ranging from -1 (most negative) to 1 (most positive), would provide a single measure of polarity. I am wondering how the "compound" score is computed. Is it calculated from the [pos, neu, neg] vector?
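Roughly what I am doing (a minimal sketch using NLTK's VADER implementation; the sentence is just an example):

from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()  # needs the vader_lexicon resource, e.g. nltk.download('vader_lexicon')
print(sia.polarity_scores("The movie was great!"))
# e.g. {'neg': 0.0, 'neu': 0.406, 'pos': 0.594, 'compound': 0.6588}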
alv*_*vas 56
The VADER algorithm outputs sentiment scores for 4 classes of sentiment (https://github.com/nltk/nltk/blob/develop/nltk/sentiment/vader.py#L441):
- neg: negative
- neu: neutral
- pos: positive
- compound: compound (i.e. aggregated score)

Let's look at the code. The first instance of compound appears at https://github.com/nltk/nltk/blob/develop/nltk/sentiment/vader.py#L421, where it computes:
compound = normalize(sum_s)
The normalize() function is defined at https://github.com/nltk/nltk/blob/develop/nltk/sentiment/vader.py#L107 as follows:
import math  # vader.py imports math at module level

def normalize(score, alpha=15):
    """
    Normalize the score to be between -1 and 1 using an alpha that
    approximates the max expected value
    """
    norm_score = score/math.sqrt((score*score) + alpha)
    return norm_score
So there is a hyperparameter alpha (default 15) that controls the normalization.
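To make the effect of alpha concrete, a quick check by hand with a hypothetical raw score of 4:

import math

def normalize(score, alpha=15):
    return score / math.sqrt((score * score) + alpha)

print(normalize(4))             # 4/sqrt(16+15) ≈ 0.7184 with the default alpha=15
print(normalize(4, alpha=100))  # 4/sqrt(16+100) ≈ 0.3714, a larger alpha shrinks the score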
As for sum_s, it is the sum of the sentiments argument passed to the score_valence() function (https://github.com/nltk/nltk/blob/develop/nltk/sentiment/vader.py#L413).
If we trace this sentiments argument back, we see that it is computed when the polarity_scores() function is called, at https://github.com/nltk/nltk/blob/develop/nltk/sentiment/vader.py#L217:
def polarity_scores(self, text):
    """
    Return a float for sentiment strength based on the input text.
    Positive values are positive valence, negative value are negative
    valence.
    """
    sentitext = SentiText(text)
    #text, words_and_emoticons, is_cap_diff = self.preprocess(text)
    sentiments = []
    words_and_emoticons = sentitext.words_and_emoticons
    for item in words_and_emoticons:
        valence = 0
        i = words_and_emoticons.index(item)
        if (i < len(words_and_emoticons) - 1 and item.lower() == "kind" and \
            words_and_emoticons[i+1].lower() == "of") or \
            item.lower() in BOOSTER_DICT:
            sentiments.append(valence)
            continue
        sentiments = self.sentiment_valence(valence, sentitext, item, i, sentiments)
    sentiments = self._but_check(words_and_emoticons, sentiments)
Looking at the polarity_scores function, what it is doing is iterating through the whole SentiText lexicon and checking with the rule-based sentiment_valence() function to assign a valence score to each sentiment (https://github.com/nltk/nltk/blob/develop/nltk/sentiment/vader.py#L243); see Section 2.1.1 of http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf.
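As a small verification sketch (assuming an NLTK install with the vader_lexicon resource downloaded): for a one-word text there are no rule adjustments, so sum_s is just that word's lexicon valence, and compound should equal normalize(sum_s):

import math
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
raw = sia.lexicon['great']                        # 3.1 in the bundled lexicon
print(sia.polarity_scores('great')['compound'])   # 0.6249
print(round(raw / math.sqrt(raw * raw + 15), 4))  # 0.6249, i.e. normalize(sum_s)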
So, going back to the compound score, we see that:
- the compound score is a normalized version of sum_s;
- sum_s is the sum of the valences computed based on some heuristics and the sentiment lexicon (aka sentiment intensity); and
- the normalized score is simply sum_s divided by the square root of its square plus an alpha parameter that inflates the denominator of the normalization function, i.e. compound = sum_s / sqrt(sum_s*sum_s + alpha).

Is this computed from the [pos, neu, neg] vector?
Not really =)
If we look at the score_valence function (https://github.com/nltk/nltk/blob/develop/nltk/sentiment/vader.py#L411), we see that the compound score is computed from sum_s before the pos, neg, and neu scores are computed using _sift_sentiment_scores(), which computes the individual pos, neg, and neu scores from the raw scores of sentiment_valence(), without the sum.
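A quick way to see this (same assumptions as above): pos, neu, and neg are proportions of the raw, un-summed valences and add up to roughly 1, while compound is normalized separately, so compound cannot be recovered from the [pos, neu, neg] vector alone:

from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
scores = sia.polarity_scores("VADER is smart, handsome, and funny.")
print(scores)  # e.g. {'neg': 0.0, 'neu': 0.254, 'pos': 0.746, 'compound': 0.8316}
print(scores['pos'] + scores['neu'] + scores['neg'])  # ~1.0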
And if we look at this alpha mathemagic, the output of the normalization seems rather unstable (if left unconstrained), depending on the value of alpha:
[The original answer shows plots of normalize() for alpha=0, alpha=15, alpha=50000, and alpha=0.001; the images are not reproduced here.]

It gets funky when alpha is negative:

[Plots for alpha=-10, alpha=-1,000,000, and alpha=-1,000,000,000; likewise not reproduced.]
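A minimal sketch to reproduce those curves (assuming numpy and matplotlib; the score range is arbitrary):

import numpy as np
import matplotlib.pyplot as plt

def normalize(score, alpha=15):
    # np.sqrt returns nan wherever score*score + alpha < 0, which is
    # exactly where the negative-alpha cases get funky.
    return score / np.sqrt((score * score) + alpha)

xs = np.arange(-20.0, 20.0, 0.1)  # hypothetical raw valence sums
for alpha in (0, 15, 50000, 0.001, -10):
    plt.plot(xs, normalize(xs, alpha), label='alpha=%s' % alpha)
plt.xlabel('sum_s')
plt.ylabel('normalized score')
plt.legend()
plt.show()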