I have:
from twisted.internet import reactor
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
I have always run this process successfully with:
process = CrawlerProcess(get_project_settings())
process.crawl(*args)
# the script will block here until the crawling is finished
process.start()
But since I moved this code into a web_crawler(self) method, like this:
def web_crawler(self):
    # set up a crawler
    process = CrawlerProcess(get_project_settings())
    process.crawl(*args)
    # the script will block here until the crawling is finished
    process.start()
    # (...)
    return (result1, result2)
and started calling the method through a class instantiation, like:
def __call__(self):
    results1 = test.web_crawler()[1]
    results2 = test.web_crawler()[0]
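Note that web_crawler() is called twice here, so process.start() runs twice in the same Python process, and each run starts the Twisted reactor, which can only be started once per process. A minimal sketch of a single-crawl variant (my assumption, keeping the same result ordering as above):

def __call__(self):
    # crawl once and unpack; web_crawler() returns (result1, result2)
    results = test.web_crawler()
    results1 = results[1]
    results2 = results[0]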
and running:
test()
I get the following error:
Traceback (most recent call last):
  File "test.py", line 573, in <module>
    …

I am currently trying to get Scrapy to run in a Google Cloud Function:
from flask import escape
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

def hello_http(request):
    settings = get_project_settings()
    process = CrawlerProcess(settings)
    process.crawl(BlogSpider)  # BlogSpider is the spider shown below
    process.start()
    return 'Hello {}!'.format(escape("Word"))
This works, but strangely, not all the time. Every once in a while the HTTP call returns an error, and I can then read in Stackdriver:
Function execution took 509 ms, finished with status: 'crash'
I checked the spider, and even simplified it to something that cannot fail, such as:
import scrapy

class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['https://blog.scrapinghub.com']

    def parse(self, response):
        yield {'id': 1}
Can someone explain to me what is going on?
Could it be a resource quota that I am hitting?
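One possible explanation, offered as an assumption: Cloud Functions instances are reused between invocations, so a second request served by the same instance would call process.start() again in a process whose Twisted reactor has already run, which would match the intermittent crashes. A minimal sketch of a workaround, assuming the runtime allows spawning child processes, so each crawl gets a fresh reactor (BlogSpider is the spider defined above):

import multiprocessing

from flask import escape
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

def _run_crawl():
    # runs in a child process, so the Twisted reactor starts fresh every time
    process = CrawlerProcess(get_project_settings())
    process.crawl(BlogSpider)  # the spider defined above
    process.start()  # blocks until the crawl is finished

def hello_http(request):
    # isolate the crawl; the function instance itself never starts a reactor
    crawl = multiprocessing.Process(target=_run_crawl)
    crawl.start()
    crawl.join()
    return 'Hello {}!'.format(escape("Word"))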