I have a question. I need to pause a function's execution for a while, but without stopping the rest of the parsing. In other words, I need a non-blocking pause.
It looks like this:
class ScrapySpider(Spider):
    name = 'live_function'

    def start_requests(self):
        yield Request('some url', callback=self.non_stop_function)

    def non_stop_function(self, response):
        for url in ['url1', 'url2', 'url3', 'more urls']:
            yield Request(url, callback=self.second_parse_function)
        # Here I need some function to sleep only this function, like time.sleep(10)
        yield Request('some url', callback=self.non_stop_function)  # Call itself

    def second_parse_function(self, response):
        pass
The function non_stop_function needs to pause for a while, but it should not block the rest of the output. If I insert time.sleep(), it stops the whole parser, which I don't want. Is it possible to pause just one function, using Twisted or something else?
Reason: I need a non-blocking function that parses a site's pages every n seconds. There it collects URLs and runs for 10 seconds. The URLs that were already fetched keep being processed, but the main function needs to sleep.
Update:
Thanks to TkTech and viach. One answer helped me understand how to make a deferred Request, and the other how to fire it. The two answers complement each other, and I ended up with a very good non-blocking pause for Scrapy:
def call_after_pause(self, response):
    d = Deferred()
    reactor.callLater(10.0, d.callback, Request(
        'https://example.com/',
        callback=self.non_stop_function, …

<div class="info">
<h3> Height:
<span>1.1</span>
</h3>
</div>
<div class="info">
<h3> Number:
<span>111111111</span>
</h3>
</div>
This is part of the site. Ultimately, I want to extract 111111111. I know I can do
soup.find_all("div", {"class": "info"})
to get a list of both divs; however, I'd rather not loop over them to check whether each one contains the text "Number".
Is there a more elegant way to extract "111111111" that still uses soup.find_all("div", {"class": "info"}), but also requires it to contain "Number"?
I also tried numberSoup = soup.find('h3', text='Number'), but it returns None.
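The text='Number' lookup returns None because that <h3> contains a child <span> as well as text, so its .string is None and an exact-string match never fires. Matching the text node with a regex and then stepping to the neighbouring <span> is one way around it (a sketch against the snippet above):

```python
import re
from bs4 import BeautifulSoup

html = """
<div class="info"><h3> Height: <span>1.1</span></h3></div>
<div class="info"><h3> Number: <span>111111111</span></h3></div>
"""
soup = BeautifulSoup(html, "html.parser")

# text='Number' fails because the h3 holds ' Number: ' plus a child tag;
# find the text node itself, then move to the next <span> in the document.
number = soup.find(string=re.compile("Number")).find_next("span").get_text()
print(number)  # → 111111111
```

string= is the current name for the old text= argument; both match bare text nodes, which is why going through the node and find_next avoids the loop over the divs.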
import requests
from bs4 import BeautifulSoup

def spider(max_page):
    page = 1
    while page <= max_page:
        url = 'https://thenewboston.com/forum/recent_activity.php?page=' + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'title text-semibold'}):
            href = link.get('href')
            print(href)
        page += 1

spider(1)
output---------------------------------
C:\Users\Edwardo\AppData\Local\Programs\Python\Python35-32\python.exe C:/Users/Edwardo/PycharmProjects/pythonJourney/spider.py
Traceback (most recent call last):
File "C:/Users/Edwardo/PycharmProjects/pythonJourney/spider.py", line 14, in <module>
spider(1)
File "C:/Users/Edwardo/PycharmProjects/pythonJourney/spider.py", line 7, in spider
source_code = requests.get(url)
AttributeError: module 'requests' has no attribute 'get'
Process finished with exit …
I am trying to find out how many empty lists there are in a list of lists. I tried counting how many lists have a length of 1, but in Python the length of [] is 0, while the length of [3, []] is 2. Is there a way to count how many empty lists a list contains?
Example list:
[[1, [2, 3, 4], ['hello', []], ['weather', ['hot', 'rainy', 'sunny', 'cold']]]]
So I want the ['hello', []] list to count as 1; that is, count how many empty lists there are in this whole structure, which is 1.
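Since [] can hide at any nesting depth, a recursive walk over sublists is enough (a sketch; count_empty is an assumed helper name):

```python
def count_empty(lst):
    """Count occurrences of [] at any nesting depth of a list."""
    count = 0
    for item in lst:
        if isinstance(item, list):
            if not item:
                count += 1                   # found an empty list
            else:
                count += count_empty(item)   # recurse into non-empty sublists
    return count

data = [[1, [2, 3, 4], ['hello', []], ['weather', ['hot', 'rainy', 'sunny', 'cold']]]]
print(count_empty(data))  # → 1
```

The isinstance check skips non-list items such as 'hello' and 3, which is what makes len()-based counting unnecessary.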