I am trying to delete/remove a static IP address that I no longer use, but I don't see a way to do so.
The closest I got from the documentation is this page, which says:
While the instance is stopped, you can still perform operations that affect the stopped instance, for example:
- [...]
- Delete or set a new static IP
PS - Thanks for pointing me in the right direction. For anyone who wants a quick look at how to do this, here is a short video on how, and why, releasing IPs when they are not in use is good practice: https://youtu.be/LmZYVYnNn3k?t=3m44s. Hope it helps.
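For reference, assuming this is a Google Compute Engine static address (which the quoted page suggests), here is a minimal sketch of releasing it through the google-api-python-client; the project, region and address names are placeholders, not values from the question:

# Sketch: release (delete) a reserved static external IP on Compute Engine.
# Assumes google-api-python-client is installed and application-default
# credentials are available; project/region/address below are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

request = compute.addresses().delete(
    project='my-project',      # placeholder project id
    region='us-central1',      # region the address was reserved in
    address='my-static-ip'     # name given when the address was reserved
)
print(request.execute())

The same thing should also be possible from the command line with gcloud compute addresses delete <name> --region <region>.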
Is it possible to get a concordance for a phrase in NLTK?
import nltk
from nltk.corpus import PlaintextCorpusReader

# Read every .txt file under the corpus directory
corpus_loc = "c:/temp/text/"
files = r".*\.txt"
read_corpus = PlaintextCorpusReader(corpus_loc, files)

# Build an nltk.Text over all tokens so concordance() can be used
corpus = nltk.Text(read_corpus.words())
test = nltk.TextCollection(read_corpus)  # not used below; TextCollection expects a corpus reader, not a path string

corpus.concordance("claim")
For example, the above returns
on okay okay okay i can give you the claim number and my information and
decide on the shop okay okay so the claim number is xxxx - xx - xxxx got
Now, if I try corpus.concordance("claim number") it doesn't work... I do have code that does this by using the .partition() method and some further coding... but I'd like to know whether the same thing can be done with concordance.
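One way to do a phrase lookup without .partition() is to scan the token list directly. This is only a sketch (phrase_concordance is a helper written for illustration, not an NLTK function), reusing read_corpus from the code above:

# Sketch: a simple phrase concordance over the raw token list, since
# Text.concordance() (at least in the version used here) matches a single token.
def phrase_concordance(tokens, phrase, width=8):
    target = [w.lower() for w in phrase.split()]
    lowered = [t.lower() for t in tokens]
    n = len(target)
    for i in range(len(tokens) - n + 1):
        if lowered[i:i + n] == target:
            left = " ".join(tokens[max(0, i - width):i])
            hit = " ".join(tokens[i:i + n])
            right = " ".join(tokens[i + n:i + n + width])
            print("%s  [%s]  %s" % (left, hit, right))

phrase_concordance(list(read_corpus.words()), "claim number")

Newer NLTK releases may also accept a list of tokens in concordance(), but the manual scan above does not depend on that.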
I am trying to use Python mechanize to check exam dates/times, and to email someone if a particular date/time shows up in the results (a screenshot of the results page is attached).
import mechanize
from BeautifulSoup import BeautifulSoup

URL = "http://secure.dre.ca.gov/PublicASP/CurrentExams.asp"

br = mechanize.Browser()
response = br.open(URL)

# there are some errors in the doctype, hence trimming the page content a bit
response.set_data(response.get_data()[200:])
br.set_response(response)

br.select_form(name="entry_form")

# select Oakland in the 1st set of checkboxes
for i in range(len(br.find_control(type="checkbox", name="cb_examSites").items)):
    if i == 2:
        br.find_control(type="checkbox", name="cb_examSites").items[i].selected = True

# select salesperson in the 2nd set of checkboxes
for i in range(len(br.find_control(type="checkbox", name="cb_examTypes").items)):
    if i == 1:
        br.find_control(type="checkbox", name="cb_examTypes").items[i].selected = True

response = br.submit()
print response.read()
I am able to get a response, but for some reason the data in my table is missing.
This is the button on the original HTML page:
<input …
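One thing that might help narrow this down is checking whether the results table is present in the submitted page at all; a small sketch reusing the BeautifulSoup import and the browser object from the code above:

# Sketch: parse the page returned by br.submit() and count the tables/rows,
# to see whether the results table is really missing or just rendered differently.
soup = BeautifulSoup(br.response().get_data())
tables = soup.findAll('table')
print(len(tables))
for table in tables:
    print(len(table.findAll('tr')))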
I am using the following code to create an index and load data into Elasticsearch.
from elasticsearch import helpers, Elasticsearch
import csv

es = Elasticsearch('localhost:9200')

index_name = 'wordcloud_data'

# Stream the CSV rows straight into the bulk helper; each row becomes one document
with open('./csv-data/' + index_name + '.csv') as f:
    reader = csv.DictReader(f)
    helpers.bulk(es, reader, index=index_name, doc_type='my-type')

print("done")
My CSV data is as follows
date,word_data,word_count
2017-06-17,luxury vehicle,11
2017-06-17,signifies acceptance,17
2017-06-17,agency imposed,16
2017-06-17,customer appreciation,11
The data loads fine, but then the datatypes are not accurate. How do I force it to say that …
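Assuming the goal is for date to be indexed as a date and word_count as an integer, one option is to create the index with an explicit mapping before the bulk load and cast the counts in Python. A sketch (written against the pre-7.x single doc_type style used above; the field types may need adjusting for your Elasticsearch version):

# Sketch: create the index with an explicit mapping first, then bulk-load,
# so Elasticsearch does not have to guess field types from the first document.
from elasticsearch import helpers, Elasticsearch
import csv

es = Elasticsearch('localhost:9200')
index_name = 'wordcloud_data'

mapping = {
    "mappings": {
        "my-type": {                # same doc_type as in the bulk call
            "properties": {
                "date":       {"type": "date", "format": "yyyy-MM-dd"},
                "word_data":  {"type": "keyword"},
                "word_count": {"type": "integer"},
            }
        }
    }
}
es.indices.create(index=index_name, body=mapping)

def docs():
    with open('./csv-data/' + index_name + '.csv') as f:
        for row in csv.DictReader(f):
            row['word_count'] = int(row['word_count'])  # cast so the value is numeric
            yield row

helpers.bulk(es, docs(), index=index_name, doc_type='my-type')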
I put together this code based on the documentation at http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html.
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer()

# Folder names under billing_test ("true"/"false") become the target classes
my_bunch = load_files("c:\\temp\\billing_test\\")
my_data = my_bunch['data']

print(my_bunch.keys())
print('target_names', my_bunch['target_names'])
print('length of data', len(my_bunch['data']))

# Learn the vocabulary and build the document-term matrix
X_train_counts = count_vect.fit_transform(my_data)
print(X_train_counts.shape)
print(count_vect.vocabulary_.get(u'algorithm'))
The output is as follows
dict_keys(['target', 'filenames', 'target_names', 'data', 'DESCR'])
target_names ['false', 'true']
length of data 920
(920, 8773)
None
Not sure why "None" is printed below (920, 8773).
I have about 460 text documents in each of the "true" and "false" folders.
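For what it's worth, vocabulary_.get(u'algorithm') returns None whenever that exact token never occurs in the documents, so it may simply not be in the billing texts. A quick sketch to check what was actually learned ('claim' is just an example token, not something known to be in the data):

# None just means the token is not in the learned vocabulary;
# check membership and look at a few of the terms that were learned.
print('algorithm' in count_vect.vocabulary_)   # False if the word never occurred
print(sorted(count_vect.vocabulary_)[:10])     # a small sample of learned terms
print(count_vect.vocabulary_.get(u'claim'))    # column index if present, else None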
Thanks,