Abh*_*Rai 3 python python-3.x pandas
I'm trying to clean Twitter data into a pandas DataFrame, but I seem to be missing a step. After I process all the tweets, am I missing the part where the cleaned tweets overwrite the old ones? When I save the file, I see that the tweets haven't changed at all. What am I missing?
import pandas as pd
import re
import emoji
import nltk
nltk.download('words')
words = set(nltk.corpus.words.words())
trump_df = pd.read_csv('new_Trump.csv')
for tweet in trump_df['tweet']:
    tweet = re.sub("@[A-Za-z0-9]+", "", tweet)  # Remove @ sign
    tweet = re.sub(r"(?:\@|http?\://|https?\://|www)\S+", "", tweet)  # Remove http links
    tweet = " ".join(tweet.split())
    tweet = ''.join(c for c in tweet if c not in emoji.UNICODE_EMOJI)  # Remove emojis
    tweet = tweet.replace("#", "").replace("_", " ")  # Remove hashtag sign but keep the text
    tweet = " ".join(w for w in nltk.wordpunct_tokenize(tweet)
                     if w.lower() in words or not w.isalpha())  # Remove non-English words (not 100% successful)
    print(tweet)
trump_df.to_csv('new_Trump.csv')
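A minimal, self-contained illustration of the underlying issue (the sample strings here are made up): reassigning the loop variable only rebinds a local name and never writes anything back into the container being iterated; the same applies to a pandas Series.

```python
tweets = ["Hello @user", "Check www.example.com now"]

for tweet in tweets:
    # Rebinds the local name `tweet`; the list element is untouched
    tweet = tweet.replace("@user", "")

print(tweets)  # unchanged: ['Hello @user', 'Check www.example.com now']

# Writing back by index (or building a new list) is what actually persists:
for i, tweet in enumerate(tweets):
    tweets[i] = tweet.replace("@user", "")

print(tweets)  # ['Hello ', 'Check www.example.com now']
```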
As you rightly said, you never store the data back. Let's create a function that does all the work, then apply it with `map`. This is more efficient than looping over each value in the DataFrame and storing the results in a list (option B).
def cleaner(tweet):
    tweet = re.sub("@[A-Za-z0-9]+", "", tweet)  # Remove @ sign
    tweet = re.sub(r"(?:\@|http?\://|https?\://|www)\S+", "", tweet)  # Remove http links
    tweet = " ".join(tweet.split())
    tweet = ''.join(c for c in tweet if c not in emoji.UNICODE_EMOJI)  # Remove emojis
    tweet = tweet.replace("#", "").replace("_", " ")  # Remove hashtag sign but keep the text
    tweet = " ".join(w for w in nltk.wordpunct_tokenize(tweet)
                     if w.lower() in words or not w.isalpha())
    return tweet

trump_df['tweet'] = trump_df['tweet'].map(cleaner)
trump_df.to_csv('')  # Specify location
This overwrites the `tweet` column with the modified values.
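A compact sketch of the same pattern, with a stripped-down cleaner so it runs without the `emoji`/`nltk` dependencies (the sample tweets and `simple_cleaner` are illustrative, not from the original data):

```python
import re

import pandas as pd

def simple_cleaner(tweet):
    tweet = re.sub(r"@[A-Za-z0-9]+", "", tweet)          # remove @mentions
    tweet = re.sub(r"https?://\S+|www\.\S+", "", tweet)  # remove links
    return " ".join(tweet.split())                       # collapse whitespace

df = pd.DataFrame({"tweet": ["Hi @abc check https://t.co/xyz", "plain tweet"]})
# Assigning the result of .map() back to the column is what persists the change
df["tweet"] = df["tweet"].map(simple_cleaner)
print(df["tweet"].tolist())  # ['Hi check', 'plain tweet']
```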
As mentioned, I think this will prove less efficient, but it is as simple as creating a list before the `for` loop and filling it with each cleaned tweet.
clean_tweets = []
for tweet in trump_df['tweet']:
    tweet = re.sub("@[A-Za-z0-9]+", "", tweet)  # Remove @ sign
    ## Here's where all the cleaning takes place
    clean_tweets.append(tweet)
trump_df['tweet'] = clean_tweets
trump_df.to_csv('')  # Specify location