Scrapy CrawlSpider for AJAX Content

Bad*_*ger 11 python scrapy web-scraping

I am trying to scrape news articles from a site. My start_url contains:

(1) links to each article: http://example.com/symbol/TSLA

(2) a "More" button that makes an AJAX call to dynamically load more articles within the same start_url: http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0&slugs=tsla&is_symbol_page=true

A parameter of the AJAX call is "page", which is incremented each time the "More" button is clicked. For example, clicking "More" once loads an additional n articles and updates the page parameter in the "More" button's onClick event, so that the next time "More" is clicked, "page" two of articles will be loaded (assuming "page" 0 was loaded initially and "page" 1 on the first click).
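
For illustration, here is a minimal sketch of the request sequence those clicks produce (it reuses the endpoint and the tsla slug from the URLs above):

ajax_template = ('http://example.com/account/ajax_headlines_content'
                 '?type=in_focus_articles&page={page}&slugs=tsla&is_symbol_page=true')

# page 0 is loaded initially; each "More" click fetches the next page
for page in range(3):
    print(ajax_template.format(page=page))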

For each "page" I would like to use Rules to scrape the contents of each article, but I do not know how many "pages" there are, and I do not want to pick some arbitrary m (e.g., 10k). I cannot figure out how to set this up.

Following the question Scrapy Crawl URLs in Order, I have tried creating a list of potential URLs, but I cannot work out how and where to send a new URL from the pool after parsing the previous URL and verifying that it contains news links for the CrawlSpider. My Rules send responses to a parse_items callback, where the article contents are parsed.

Is there a way to observe the contents of a linked page before the Rules are applied and parse_items is called (similar to the BaseSpider example), so that I know when to stop crawling?

Simplified code (I removed several of the fields I am parsing, for clarity):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.http import Request
from scrapy import log

from myproject.items import NewsItem  # NewsItem is defined in your project's items.py


class ExampleSite(CrawlSpider):

    name = "so"
    download_delay = 2

    more_pages = True
    current_page = 0

    allowed_domains = ['example.com']

    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                      '&slugs=tsla&is_symbol_page=true']

    ##could also use
    ##start_urls = ['http://example.com/symbol/tsla']

    ajax_urls = []
    for i in range(1,1000):
        ajax_urls.append('http://example.com/account/ajax_headlines_content?type=in_focus_articles&page='+str(i)+
                      '&slugs=tsla&is_symbol_page=true')

    rules = (
             Rule(SgmlLinkExtractor(allow=('/symbol/tsla', ))),
             Rule(SgmlLinkExtractor(allow=('/news-article.*tesla.*', '/article.*tesla.*', )), callback='parse_item')
            )

    ## need something like this??
    ## override parse?
    ## if response.body == 'no results':
    ##     self.more_pages = False
    ##     ## stop crawler??
    ## else:
    ##     self.current_page = self.current_page + 1
    ##     yield Request(self.ajax_urls[self.current_page], callback=self.parse_start_url)


    def parse_item(self, response):

        self.log("Scraping: %s" % response.url, level=log.INFO)

        hxs = Selector(response)

        item = NewsItem()

        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()').extract()
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()').extract()

        yield item

Paw*_*ech 11

A CrawlSpider may be too limited for your purposes here. If you need a lot of custom logic, you are usually better off inheriting from Spider.

Scrapy provides the CloseSpider exception, which you can raise when you need to stop parsing under certain conditions. The page you are crawling returns the message "There are no Focus articles on your stocks" when you go past the maximum page, so you can check for that message and stop iterating when it appears.
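
In isolation, the pattern is just a check and a raise inside a callback. A minimal sketch (the spider name is hypothetical; the marker string is the one quoted above):

from scrapy.spider import Spider
from scrapy.exceptions import CloseSpider

class StopOnEmptySpider(Spider):
    # Hypothetical minimal spider: abort the entire crawl as soon as the
    # "empty page" marker appears in a response.
    name = "stop_on_empty"
    start_urls = ['http://example.com/account/ajax_headlines_content'
                  '?type=in_focus_articles&page=0&slugs=tsla&is_symbol_page=true']

    def parse(self, response):
        if "There are no Focus articles on your stocks." in response.body:
            raise CloseSpider(reason="no more pages to parse")
        # ...otherwise extract links and yield items or follow-up requests here...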

In your case, you can use something like this:

from urlparse import urljoin

from scrapy import log
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.http import Request
from scrapy.exceptions import CloseSpider

from myproject.items import NewsItem  # NewsItem is defined in your project's items.py

class ExampleSite(Spider):
    name = "so"
    download_delay = 0.1

    more_pages = True
    next_page = 1

    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                      '&slugs=tsla&is_symbol_page=true']

    allowed_domains = ['example.com']

    def create_ajax_request(self, page_number):
        """
        Helper function to create ajax request for next page.
        """
        ajax_template = 'http://example.com/account/ajax_headlines_content?type=in_focus_articles&page={pagenum}&slugs=tsla&is_symbol_page=true'

        url = ajax_template.format(pagenum=page_number)
        return Request(url, callback=self.parse)

    def parse(self, response):
        """
        Parsing of each page.
        """
        if "There are no Focus articles on your stocks." in response.body:
            self.log("About to close spider", log.WARNING)
            raise CloseSpider(reason="no more pages to parse")


        # there is some content extract links to articles
        sel = Selector(response)
        links_xpath = "//div[@class='symbol_article']/a/@href"
        links = sel.xpath(links_xpath).extract()
        for link in links:
            url = urljoin(response.url, link)
            # follow link to article
            # commented out to see how pagination works
            #yield Request(url, callback=self.parse_item)

        # generate request for next page
        self.next_page += 1
        yield self.create_ajax_request(self.next_page)

    def parse_item(self, response):
        """
        Parsing of each article page.
        """
        self.log("Scraping: %s" % response.url, level=log.INFO)

        hxs = Selector(response)

        item = NewsItem()

        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()').extract()
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()').extract()

        yield item
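
Assuming the spider lives in a standard Scrapy project, it can be run with scrapy crawl so. Once the AJAX endpoint returns the "no Focus articles" message, CloseSpider shuts the crawl down cleanly, and the reason passed to it ("no more pages to parse") is recorded as finish_reason in the crawl stats.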

  • Thanks! I am new to Scrapy and thought CrawlSpider was the best choice. This example gives me a foundation to build on. (2 upvotes)