Given multiple URLs, I'm trying to crawl and scrape several web pages. I'm testing with Wikipedia, and to keep things simple I've used the same XPath selector for every page, but eventually I want to use many different XPath selectors unique to each page, so each page has its own separate parsePage method.
This code works correctly when I don't use an item loader and just populate the item directly. When I use an item loader, the items are populated strangely, and it seems to completely ignore the callbacks assigned in the parse method and to use only the start_urls for the parsePage methods.
import scrapy
from scrapy.http import Request
from scrapy import Spider, Request, Selector
from testanother.items import TestItems, TheLoader

class tester(scrapy.Spider):
    name = 'vs'
    handle_httpstatus_list = [404, 200, 300]
    # Usually, I only get data from the first start url
    start_urls = ['https://en.wikipedia.org/wiki/SANZAAR',
                  'https://en.wikipedia.org/wiki/2016_Rugby_Championship',
                  'https://en.wikipedia.org/wiki/2016_Super_Rugby_season']

    def parse(self, response):
        #item = TestItems()
        l = TheLoader(item=TestItems(), response=response)
        # When I use an item loader, the url in the request is completely ignored.
        # Without the item loader, it works properly.
        request = Request("https://en.wikipedia.org/wiki/2016_Rugby_Championship",
                          callback=self.parsePage1, meta={'loadernext': l}, dont_filter=True)
        yield request

        request = Request("https://en.wikipedia.org/wiki/SANZAAR",
                          callback=self.parsePage2, meta={'loadernext1': l}, dont_filter=True)
        yield request

        yield Request("https://en.wikipedia.org/wiki/2016_Super_Rugby_season",
                      callback=self.parsePage3, meta={'loadernext2': l}, dont_filter=True)

    def parsePage1(self, response):
        loadernext = response.meta['loadernext']
        loadernext.add_xpath('title1', '//*[@id="firstHeading"]/text()')
        return loadernext.load_item()
        # I'm not sure if this return and load_item is the problem, because I've tried
        # yielding/returning to another method that does the item loading instead, and
        # the first start url is still the only url scraped.

    def parsePage2(self, response):
        loadernext1 = response.meta['loadernext1']
        loadernext1.add_xpath('title2', '//*[@id="firstHeading"]/text()')
        return loadernext1.load_item()

    def parsePage3(self, response):
        loadernext2 = response.meta['loadernext2']
        loadernext2.add_xpath('title3', '//*[@id="firstHeading"]/text()')
        return loadernext2.load_item()
Here are the results when I'm not using the item loader:
{'title1': [u'2016 Rugby Championship'],
'title': [u'SANZAAR'],
'title3': [u'2016 Super Rugby season']}
And here is some of the log output when using the item loader:
{'title2': u'SANZAAR'}
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Rugby_Championship> (referer: https://en.wikipedia.org/wiki/SANZAAR)
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Rugby_Championship> (referer: https://en.wikipedia.org/wiki/2016_Rugby_Championship)
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Super_Rugby_season>
{'title2': u'SANZAAR', 'title3': u'SANZAAR'}
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/SANZAAR> (referer: https://en.wikipedia.org/wiki/2016_Rugby_Championship)
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Rugby_Championship> (referer: https://en.wikipedia.org/wiki/2016_Super_Rugby_season)
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Super_Rugby_season> (referer: https://en.wikipedia.org/wiki/2016_Rugby_Championship)
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/2016_Super_Rugby_season> (referer: https://en.wikipedia.org/wiki/2016_Super_Rugby_season)
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Rugby_Championship>
{'title1': u'SANZAAR', 'title2': u'SANZAAR', 'title3': u'SANZAAR'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Rugby_Championship>
{'title1': u'2016 Rugby Championship'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/SANZAAR>
{'title1': u'2016 Rugby Championship', 'title2': u'2016 Rugby Championship'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Rugby_Championship>
{'title1': u'2016 Super Rugby season'}
2016-09-24 14:30:43 [scrapy] DEBUG: Crawled (200) <GET https://en.wikipedia.org/wiki/SANZAAR> (referer: https://en.wikipedia.org/wiki/2016_Super_Rugby_season)
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Super_Rugby_season>
{'title1': u'2016 Rugby Championship',
'title2': u'2016 Rugby Championship',
'title3': u'2016 Rugby Championship'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/2016_Super_Rugby_season>
{'title1': u'2016 Super Rugby season', 'title3': u'2016 Super Rugby season'}
2016-09-24 14:30:43 [scrapy] DEBUG: Scraped from <200 https://en.wikipedia.org/wiki/SANZAAR>
{'title1': u'2016 Super Rugby season',
'title2': u'2016 Super Rugby season',
'title3': u'2016 Super Rugby season'}
2016-09-24 14:30:43 [scrapy] INFO: Clos
What exactly is going wrong? Thanks!
Answer (by sta*_*ify):
One problem is that you're passing multiple references to the same item loader instance into several callbacks, e.g., there are two yield request instructions in parse. Because all three callbacks mutate that one shared loader, each load_item() call returns a snapshot containing whatever the other callbacks have added so far, which is why the logged items mix fields.
Also, in the follow-up callbacks, the loader keeps using the old response object; for example, in parsePage1 the item loader is still operating on the response from parse.
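The reason is that the loader builds its selector from the response it is constructed with, so later add_xpath calls keep querying that original page no matter which callback runs them. Here is a deliberately simplified sketch of that behavior (illustration only, not Scrapy's actual ItemLoader source):

from scrapy.selector import Selector

class SimplifiedLoader(object):
    """Illustration only: shows why the response is 'frozen' at construction."""

    def __init__(self, item, response):
        self.item = item
        # The selector is created once, from the construction-time response...
        self.selector = Selector(response=response)

    def add_xpath(self, field, xpath):
        # ...so every add_xpath call queries that original page, even when it
        # is invoked from a callback that received a different response.
        self.item[field] = self.selector.xpath(xpath).extract()

    def load_item(self):
        return self.item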
In most cases it is not advisable to pass an item loader to another callback. Instead, you may find it better to pass the item object directly.
Here's a short (and incomplete) example, made by editing your code:
def parse(self, response):
    l = TheLoader(item=TestItems(), response=response)
    request = Request(
        "https://en.wikipedia.org/wiki/2016_Rugby_Championship",
        callback=self.parsePage1,
        meta={'item': l.load_item()},
        dont_filter=True
    )
    yield request

def parsePage1(self, response):
    loadernext = TheLoader(item=response.meta['item'], response=response)
    loadernext.add_xpath('title1', '//*[@id="firstHeading"]/text()')
    return loadernext.load_item()
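For completeness, the same idea can be chained so that all three titles end up in a single item: each callback wraps the partially built item in a fresh loader bound to its own response, then passes the result on. This is only a sketch of one possible arrangement, reusing the TestItems/TheLoader definitions and imports from the question; the sequential ordering of the three pages is my assumption:

def parse(self, response):
    # Start the chain with an empty item instead of a shared loader.
    yield Request(
        "https://en.wikipedia.org/wiki/2016_Rugby_Championship",
        callback=self.parsePage1,
        meta={'item': TestItems()},
        dont_filter=True
    )

def parsePage1(self, response):
    # Bind a fresh loader to *this* response and the item built so far.
    l = TheLoader(item=response.meta['item'], response=response)
    l.add_xpath('title1', '//*[@id="firstHeading"]/text()')
    yield Request(
        "https://en.wikipedia.org/wiki/SANZAAR",
        callback=self.parsePage2,
        meta={'item': l.load_item()},
        dont_filter=True
    )

def parsePage2(self, response):
    l = TheLoader(item=response.meta['item'], response=response)
    l.add_xpath('title2', '//*[@id="firstHeading"]/text()')
    yield Request(
        "https://en.wikipedia.org/wiki/2016_Super_Rugby_season",
        callback=self.parsePage3,
        meta={'item': l.load_item()},
        dont_filter=True
    )

def parsePage3(self, response):
    l = TheLoader(item=response.meta['item'], response=response)
    l.add_xpath('title3', '//*[@id="firstHeading"]/text()')
    # Only the final callback emits the fully populated item.
    yield l.load_item()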