Python Scrapy: parse extracted links with another function

Sha*_*ady 4 python scrapy web-scraping scrapy-spider

I'm new to Scrapy and I'm trying to scrape Yellow Pages for learning purposes. Everything works, but I also want the email addresses. To get those I need to visit the links extracted inside parse and parse each one with a separate parse_email function, but it never fires.

I mean, I tested the parse_email function on its own and it runs, but it doesn't work from inside the main parse function. I want parse_email to fetch the source of each link, so I call it with a callback, but all it returns is the request itself, like <GET https://www.yellowpages.com/los-angeles-ca/mip/palm-tree-la-7254813?lid=7254813>, when it should return the email. For some reason parse_email doesn't run: it just returns the link without opening the page.

Here is my code with the relevant parts commented:

import scrapy
import requests
from urlparse import urljoin

scrapy.optional_features.remove('boto')

class YellowSpider(scrapy.Spider):
    name = 'yellow spider'
    start_urls = ['https://www.yellowpages.com/search?search_terms=restaurant&geo_location_terms=Los+Angeles%2C+CA']

    def parse(self, response):
        SET_SELECTOR = '.info'
        for brickset in response.css(SET_SELECTOR):

            NAME_SELECTOR = 'h3 a ::text'
            ADDRESS_SELECTOR = '.adr ::text'
            PHONE = '.phone.primary ::text'
            WEBSITE = '.links a ::attr(href)'


            # Getting the link of the page that has the email using this selector
            EMAIL_SELECTOR = 'h3 a ::attr(href)'

            # extracting the link
            email = brickset.css(EMAIL_SELECTOR).extract_first()

            # joining and making the complete url
            url = urljoin(response.url, brickset.css('h3 a ::attr(href)').extract_first())



            yield {
                'name': brickset.css(NAME_SELECTOR).extract_first(),
                'address': brickset.css(ADDRESS_SELECTOR).extract_first(),
                'phone': brickset.css(PHONE).extract_first(),
                'website': brickset.css(WEBSITE).extract_first(),

                # ONLY returning the link of the page, not calling the function

                'email': scrapy.Request(url, callback=self.parse_email),
            }

        NEXT_PAGE_SELECTOR = '.pagination ul a ::attr(href)'
        next_page = response.css(NEXT_PAGE_SELECTOR).extract()[-1]
        if next_page:
            yield scrapy.Request(
                response.urljoin(next_page),
                callback=self.parse
            )

    def parse_email(self, response):

        # xpath for the email address on the nested page

        EMAIL_SELECTOR = '//a[@class="email-business"]/@href'

        # returning the extracted email - the XPath works (I checked) but the callback is never invoked for some reason
        yield {
            'email': response.xpath(EMAIL_SELECTOR).extract_first().replace('mailto:', '')
        }

I have no idea what I'm doing wrong.

luf*_*fte 7

You are yielding a dict with a Request inside it. Scrapy will not dispatch that request, because it doesn't know it exists (requests do not get scheduled automatically after being created). You need to yield the actual Request.
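
A minimal sketch of the difference (my illustration, using a placeholder example.com URL rather than the question's site):

import scrapy

class ContrastSpider(scrapy.Spider):
    name = 'contrast'
    start_urls = ['https://example.com/']

    def parse(self, response):
        url = response.urljoin('/detail')  # hypothetical follow-up page

        # Wrong: the Request object is merely stored as a field of the
        # yielded item; Scrapy exports it as-is and never downloads it.
        yield {'email': scrapy.Request(url, callback=self.parse_email)}

        # Right: yield the Request itself, so the engine schedules it
        # and eventually calls parse_email with the downloaded response.
        yield scrapy.Request(url, callback=self.parse_email)

    def parse_email(self, response):
        yield {'email': response.xpath('//a[@class="email-business"]/@href').extract_first()}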

Inside parse_email, in order to "remember" which item each email belongs to, you need to pass the rest of the item's data along with the request. You can do this with the meta argument.

Example:

In parse:

yield scrapy.Request(url, callback=self.parse_email, meta={'item': {
    'name': brickset.css(NAME_SELECTOR).extract_first(),
    'address': brickset.css(ADDRESS_SELECTOR).extract_first(),
    'phone': brickset.css(PHONE).extract_first(),
    'website': brickset.css(WEBSITE).extract_first(),
}})

In parse_email:

item = response.meta['item']  # The item this email belongs to
item['email'] = response.xpath(EMAIL_SELECTOR).extract_first().replace('mailto:', '')
return item
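
Putting both pieces together, the whole spider would look roughly like this (a sketch keeping the question's selectors; response.urljoin replaces the Python 2-only urlparse import, and the None check on the email link is an extra guard, since extract_first() returns None when nothing matches):

import scrapy

class YellowSpider(scrapy.Spider):
    name = 'yellowspider'
    start_urls = ['https://www.yellowpages.com/search?search_terms=restaurant&geo_location_terms=Los+Angeles%2C+CA']

    def parse(self, response):
        for brickset in response.css('.info'):
            # Build the absolute URL of the detail page that holds the email
            url = response.urljoin(brickset.css('h3 a ::attr(href)').extract_first())
            # Yield the Request itself and carry the scraped fields in meta
            yield scrapy.Request(url, callback=self.parse_email, meta={'item': {
                'name': brickset.css('h3 a ::text').extract_first(),
                'address': brickset.css('.adr ::text').extract_first(),
                'phone': brickset.css('.phone.primary ::text').extract_first(),
                'website': brickset.css('.links a ::attr(href)').extract_first(),
            }})

        # Guard against an empty pagination list before indexing it
        next_pages = response.css('.pagination ul a ::attr(href)').extract()
        if next_pages:
            yield scrapy.Request(response.urljoin(next_pages[-1]), callback=self.parse)

    def parse_email(self, response):
        item = response.meta['item']  # the item this email belongs to
        email = response.xpath('//a[@class="email-business"]/@href').extract_first()
        # extract_first() returns None when no email link is present
        item['email'] = email.replace('mailto:', '') if email else None
        yield item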