kRa*_*y · 11 · tags: twitter, r, text-mining, data-cleaning
I used the twitteR package to extract tweets from Twitter and save them to a text file. I then ran the following transformations on the corpus:
xx <- tm_map(xx, removeNumbers, lazy = TRUE, mc.cores = 1)
xx <- tm_map(xx, stripWhitespace, lazy = TRUE, mc.cores = 1)
xx <- tm_map(xx, removePunctuation, lazy = TRUE, mc.cores = 1)
xx <- tm_map(xx, strip_retweets, lazy = TRUE, mc.cores = 1)
xx <- tm_map(xx, removeWords, stopwords("english"), lazy = TRUE, mc.cores = 1)
(Using mc.cores = 1 and lazy = TRUE, since otherwise R on the Mac throws errors.)
tdm <- TermDocumentMatrix(xx)
But this term-document matrix contains many strange symbols, meaningless words, and so on. For example, if a tweet is:
RT @Foxtel: One man stands between us and annihilation: @IanZiering.
Sharknado 3: OH HELL NO! - July 23 on Foxtel @SyfyAU
then after cleaning I want to be left with only proper, complete English words, with everything that makes the sentence/phrase invalid (usernames, shortened words, URLs) removed. For example:
One man stands between us and annihilation oh hell no on
(Note: the transformation commands in the tm package can only remove stop words and punctuation, strip whitespace, and convert to lowercase.)
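That said, the tm pipeline can be extended beyond those built-ins: content_transformer() lifts any plain string function into something tm_map() can apply. A hedged sketch (the transformer names and regexes below are my own illustrations, not part of the tm API):

```r
library(tm)

# Illustrative transformers (not tm built-ins): content_transformer()
# wraps an arbitrary string function for use with tm_map().
strip_urls     <- content_transformer(function(x) gsub("http\\S+", "", x))
strip_mentions <- content_transformer(function(x) gsub("@\\w+", "", x))

xx <- VCorpus(VectorSource(
  "One man stands between us and annihilation @IanZiering http://t.co/abc"
))
xx <- tm_map(xx, strip_urls)
xx <- tm_map(xx, strip_mentions)
content(xx[[1]])
```

The same wrapper is also how arbitrary functions such as tolower are meant to be passed to tm_map() in recent versions of tm.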
kRa*_*y · 13
Using gsub and the stringr package, I've worked out a partial solution that removes retweets and references to screen names, hashtags, whitespace, numbers, punctuation, and URLs:
# remove the "&amp;" HTML entity
clean_tweet <- gsub("&amp;", "", unclean_tweet)
# remove retweet/via headers ("RT @user", "via @user")
clean_tweet <- gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", clean_tweet)
# remove @mentions
clean_tweet <- gsub("@\\w+", "", clean_tweet)
# remove punctuation
clean_tweet <- gsub("[[:punct:]]", "", clean_tweet)
# remove digits
clean_tweet <- gsub("[[:digit:]]", "", clean_tweet)
# remove URLs (their punctuation is already gone, so "http..." is one word)
clean_tweet <- gsub("http\\w+", "", clean_tweet)
# collapse runs of spaces/tabs into a single space
clean_tweet <- gsub("[ \t]{2,}", " ", clean_tweet)
# trim leading and trailing whitespace
clean_tweet <- gsub("^\\s+|\\s+$", "", clean_tweet)
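To sanity-check a chain like this, here is a self-contained run on the example tweet from the question (a sketch of the same substitutions; I've used perl = TRUE on the retweet pattern for the \b and (?:…) extensions, and replaced runs of whitespace with a single space rather than deleting them, so words don't get glued together):

```r
unclean_tweet <- paste(
  "RT @Foxtel: One man stands between us and annihilation: @IanZiering.",
  "Sharknado 3: OH HELL NO! - July 23 on Foxtel @SyfyAU"
)

clean_tweet <- gsub("&amp;", "", unclean_tweet)
clean_tweet <- gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", clean_tweet, perl = TRUE)
clean_tweet <- gsub("@\\w+", "", clean_tweet)
clean_tweet <- gsub("[[:punct:]]", "", clean_tweet)
clean_tweet <- gsub("[[:digit:]]", "", clean_tweet)
clean_tweet <- gsub("http\\w+", "", clean_tweet)
clean_tweet <- gsub("[ \t]{2,}", " ", clean_tweet)   # collapse, don't delete
clean_tweet <- gsub("^\\s+|\\s+$", "", clean_tweet)

clean_tweet
# "One man stands between us and annihilation Sharknado OH HELL NO July on Foxtel"
```

This is close to the desired output from the question; the stray words ("Sharknado", "Foxtel") are exactly what the dictionary-subtraction step discussed below the code is meant to catch.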
Reference: (Hicks, 2014). After the above, I did the following:
# get rid of unnecessary (double) spaces
clean_tweet <- str_replace_all(clean_tweet, "  ", " ")
# get rid of URLs
clean_tweet <- str_replace_all(clean_tweet, "http://t\\.co/[a-zA-Z0-9]+", "")
# take out the retweet header, there is only one
clean_tweet <- str_replace(clean_tweet, "RT @[a-zA-Z0-9_]+: ", "")
# get rid of hashtags
clean_tweet <- str_replace_all(clean_tweet, "#[a-zA-Z0-9_]+", "")
# get rid of references to other screen names
clean_tweet <- str_replace_all(clean_tweet, "@[a-zA-Z0-9_]+", "")
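One detail worth keeping in mind with chains like these: any pattern that relies on the @ or # markers must run before [[:punct:]] stripping, because removing punctuation first destroys the markers and leaves the hashtag or username text behind as an ordinary word. A small base-R illustration (the example string is made up):

```r
x <- "Great show #Sharknado @Foxtel"

# Wrong order: punctuation stripped first, so "#Sharknado" can no longer
# be recognised and its text survives as a plain word.
wrong <- gsub("#\\w+", "", gsub("[[:punct:]]", "", x))

# Right order: hashtags and mentions removed first, then punctuation.
right <- gsub("[[:punct:]]", "", gsub("@\\w+", "", gsub("#\\w+", "", x)))

wrong  # "Great show Sharknado Foxtel"
right  # "Great show" plus leftover spaces, to be collapsed/trimmed later
```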
Reference: (Stanton, 2013)
Before doing any of the above, I collapsed the whole character vector into a single long string using:
paste(mytweets, collapse = " ")
In contrast to the tm_map transformations, this cleaning process worked well for me.
What I'm left with now is a set of proper words plus a few improper ones. All that remains is to figure out how to remove the words that aren't proper English; I will probably have to subtract my set of words from a dictionary of words.
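That subtraction step can be sketched in base R with %in% against a reference word list. The `dictionary` vector below is a made-up stand-in; a real lexicon could come from e.g. the qdapDictionaries or hunspell packages:

```r
# Hypothetical sketch: keep only tokens that appear in a reference word
# list; `dictionary` is an illustrative stand-in for a real English lexicon.
keep_dictionary_words <- function(text, dictionary) {
  tokens <- unlist(strsplit(tolower(text), "\\s+"))
  paste(tokens[tokens %in% dictionary], collapse = " ")
}

dictionary <- c("one", "man", "stands", "between", "us", "and", "annihilation")
keep_dictionary_words("One man stands between us and annihilation SyfyAU", dictionary)
# "one man stands between us and annihilation"
```

The set-difference direction matters: keeping tokens that are *in* the dictionary drops usernames and shortened words, whereas subtracting the dictionary from your tokens would leave exactly the junk.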
library(tidyverse)

clean_tweets <- function(x) {
  x %>%
    # Remove URLs
    str_remove_all(" ?(f|ht)(tp)(s?)(://)(.*)[.|/](.*)") %>%
    # Remove mentions e.g. "@my_account"
    str_remove_all("@[[:alnum:]_]{4,}") %>%
    # Remove hashtags
    str_remove_all("#[[:alnum:]_]+") %>%
    # Replace "&amp;" character reference with "and"
    str_replace_all("&amp;", "and") %>%
    # Remove punctuation, using a standard character class
    str_remove_all("[[:punct:]]") %>%
    # Remove "RT: " from the beginning of retweets
    str_remove_all("^RT:? ") %>%
    # Replace any newline characters with a space
    str_replace_all("\\\n", " ") %>%
    # Make everything lowercase
    str_to_lower() %>%
    # Remove any leading or trailing whitespace
    str_trim("both")
}

tweets %>% clean_tweets()