I'm using Scrapy 0.24 to scrape data from a website. However, I can't get any of the requests issued from my parse_summary callback to fire.
class ExampleSpider(scrapy.Spider):
    name = "tfrrs"
    allowed_domains = ["example.org"]
    start_urls = (
        'http://www.example.org/results_search.html?page=0&sport=track&title=1&go=1',
    )

    def __init__(self, *args, **kwargs):
        super(TfrrsSpider, self).__init__(*args, **kwargs)
        self.start_urls = ['http://www.example.org/results_search.html?page=0&sport=track'&title=1&go=1',]
        pass

    # works without issue
    def parse(self, response):
        races = response.xpath("//table[@width='100%']").xpath(".//a[starts-with(@href, 'http://www.tfrrs.org/results/')]/@href").extract()
        callback = self.parse_trackfieldsummary
        for race in races:
            yield scrapy.Request(race, callback=self.parse_summary)
        pass

    # works without issue
    def parse_summary(self, response):
        baseurl = 'http://www.example.org/results/'
        results = response.xpath("//div[@class='data']").xpath('.//a[@style="margin-left: 20px;"]/@href').extract()
        for result in results:
            print(baseurl+result)  # shows that url is correct every time
            yield scrapy.Request(baseurl+result, callback=self.parse_compiled)

    # is never fired or shown in terminal
    def parse_compiled(self, response):
        print('test')
        results = response.xpath("//table[@style='width: 935px;']")
        print(results)
When I deliberately make the requests in parse_summary fail (wrong domain, etc.), I can see the errors in the console, but when I use the correct URLs it's as if the callback is never even invoked. I also tested the URLs that parse_summary requests from inside the parse method, and there they work fine. What could cause them not to fire from parse_summary while succeeding from parse? Thank you in advance for your help.
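One quick way to narrow this down (a debugging sketch of my own, not the original code; the on_error helper is hypothetical) is to yield the requests from parse_summary with dont_filter=True and an errback: dont_filter forces the requests past the duplicate filter and the offsite middleware, and the errback makes any download failure show up in the log instead of disappearing silently.

    def parse_summary(self, response):
        baseurl = 'http://www.example.org/results/'
        results = response.xpath("//div[@class='data']").xpath('.//a[@style="margin-left: 20px;"]/@href').extract()
        for result in results:
            yield scrapy.Request(
                baseurl + result,
                callback=self.parse_compiled,
                errback=self.on_error,   # surface download failures in the crawl log
                dont_filter=True,        # bypass the dupefilter and the offsite middleware
            )

    def on_error(self, failure):
        # hypothetical helper: just make failures visible
        self.log('request failed: %s' % failure)

If the requests suddenly go through with dont_filter=True, something is filtering them before they reach the downloader.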
After making some changes to my Spider, I still get the same result. However, if I use a brand-new project, it works. So I'm guessing it has something to do with my project settings.
These are my project settings (where raceretrieval is my project name):
BOT_NAME = 'raceretrieval'

DOWNLOAD_DELAY = 1
CONCURRENT_REQUESTS = 100

SPIDER_MODULES = ['raceretrieval.spiders']
NEWSPIDER_MODULE = 'raceretrieval.spiders'

ITEM_PIPELINES = {
    'raceretrieval.pipelines.RaceValidationPipeline': 1,
    'raceretrieval.pipelines.RaceDistanceValidationPipeline': 2,
    # 'raceretrieval.pipelines.RaceUploadPipeline': 9999
}
If I comment out both DOWNLOAD_DELAY = 1 and CONCURRENT_REQUESTS = 100, the spider works as expected. Why is that? I don't understand how they could affect this.
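For what it's worth, this is roughly how those two settings behave in Scrapy (a sketch with illustrative values, not this project's real configuration): DOWNLOAD_DELAY inserts a pause between consecutive requests to the same site, which effectively serializes them per domain, while CONCURRENT_REQUESTS only caps how many requests the downloader handles globally. Neither should drop requests on its own, so they are more likely changing the timing than causing the missing callbacks.

    # settings.py -- illustrative values, not the original project's configuration
    DOWNLOAD_DELAY = 1                  # seconds to wait between requests to the same domain
    CONCURRENT_REQUESTS = 16            # global downloader cap (Scrapy's default)
    CONCURRENT_REQUESTS_PER_DOMAIN = 8  # per-domain cap (Scrapy's default)

You can also toggle them per run, e.g. scrapy crawl tfrrs -s DOWNLOAD_DELAY=0, to confirm which setting actually changes the behaviour.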
I corrected some typos and set the allowed domains correctly, and parse_summary seems to work fine. The URLs are extracted, and the parse_compiled results are printed correctly in the terminal.
The output lines look like this:
2014-12-29 12:19:05+0100 [example] DEBUG: Crawled (200) <GET
http://www.tfrrs.org/results/36288_f.html> (referer:
http://www.tfrrs.org/results/36288.html) <200
http://www.tfrrs.org/results/36288_f.html>
[<Selector xpath="//table[@style='width: 935px;']" data=u'<table width="0" border="0" cellspacing='>, <Selector xpath="//table[@style='width: 935px;']" data=u'<table width="0" border="0" cellspacing='> .....
Here is the corrected code:
class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["tfrrs.org"]
    start_urls = (
        'http://www.tfrrs.org/results_search.html?page=0&sport=track&title=1&go=1',
    )

    def __init__(self, *args, **kwargs):
        super(ExampleSpider, self).__init__(*args, **kwargs)
        self.start_urls = ['http://www.tfrrs.org/results_search.html?page=0&sport=track&title=1&go=1',]

    # works without issue
    def parse(self, response):
        races = response.xpath("//table[@width='100%']").xpath(".//a[starts-with(@href, 'http://www.tfrrs.org/results/')]/@href").extract()
        #callback = self.parse_trackfieldsummary
        for race in races:
            yield scrapy.Request(race, callback=self.parse_summary)
        pass

    # works without issue
    def parse_summary(self, response):
        baseurl = 'http://www.tfrrs.org/results/'
        results = response.xpath("//div[@class='data']").xpath('.//a[@style="margin-left: 20px;"]/@href').extract()
        for result in results:
            #print(baseurl+result)  # shows that url is correct every time
            yield scrapy.Request(baseurl+result, callback=self.parse_compiled)

    # now fires and prints as expected
    def parse_compiled(self, response):
        print(response)
        results = response.xpath("//table[@style='width: 935px;']")
        print(results)
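Beyond the typos, the change that presumably matters most is allowed_domains: the original spider had allowed_domains = ["example.org"] while the follow-up requests in parse_summary go to tfrrs.org, and Scrapy's OffsiteMiddleware silently filters requests whose host is not covered by allowed_domains, which produces exactly the "callback never fires" symptom. A minimal sketch of the relevant lines:

    class ExampleSpider(scrapy.Spider):
        name = "example"
        # must match the domain the yielded requests actually go to,
        # otherwise OffsiteMiddleware drops them before they are downloaded
        allowed_domains = ["tfrrs.org"]

Requests dropped this way appear as offsite-filter DEBUG messages in the crawl log, so keeping the log level at DEBUG is a quick way to spot this kind of problem.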