Post by sin*_*000

Can't get Scrapy to follow links

I'm trying to scrape a website, but I can't get Scrapy to follow the links. I don't get any Python errors, and I can't see anything happening with Wireshark. I thought it might be the regular expression, but I tried ".*" to match any link and that doesn't work either. The "parse" method does get called, but I need the spider to follow the "sinopsis.aspx" links with the callback parse_peliculas.

Edit: with the parse method commented out, the rules do work and parse_peliculas gets run. What I need to do now is rename the parse method and drive it from a rule callback instead, but I still can't get that to work.

Here is my spider code:

import re

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from Cinesillo.items import CinemarkItem, PeliculasItem

class CinemarkSpider(CrawlSpider):
    name = 'cinemark'
    allowed_domains = ['cinemark.com.mx']
    start_urls = ['http://www.cinemark.com.mx/smartphone/iphone/vercartelera.aspx?fecha=&id_theater=555',
                  'http://www.cinemark.com.mx/smartphone/iphone/vercartelera.aspx?fecha=&id_theater=528']


    rules = (Rule(SgmlLinkExtractor(allow=(r'sinopsis.aspx.*', )), callback='parse_peliculas', follow=True),)

    def parse(self, response):
        item = CinemarkItem()
        hxs = HtmlXPathSelector(response)
        cine = hxs.select('(//td[@class="title2"])[1]')
        direccion = hxs.select('(//td[@class="title2"])[2]')

        item['nombre'] = cine.select('text()').extract()
        item['direccion'] = direccion.select('text()').extract()
        return item

    def parse_peliculas(self, response):
        item = PeliculasItem()
        hxs = HtmlXPathSelector(response)
        titulo = hxs.select('//td[@class="pop_up_title"]')
        item['titulo'] = titulo.select('text()').extract() …
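What the Edit is running into is a documented CrawlSpider caveat: CrawlSpider implements parse() internally to dispatch its rules, so overriding parse() in a subclass silently disables link following. The hook CrawlSpider provides for processing the start URLs themselves is parse_start_url. Below is a minimal sketch of that fix, reusing the question's XPaths and item classes as-is (untested against the live site):

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from Cinesillo.items import CinemarkItem, PeliculasItem

class CinemarkSpider(CrawlSpider):
    name = 'cinemark'
    allowed_domains = ['cinemark.com.mx']
    start_urls = ['http://www.cinemark.com.mx/smartphone/iphone/vercartelera.aspx?fecha=&id_theater=555',
                  'http://www.cinemark.com.mx/smartphone/iphone/vercartelera.aspx?fecha=&id_theater=528']

    # Same rule as before; the dot is escaped so it matches a literal "." only.
    rules = (Rule(SgmlLinkExtractor(allow=(r'sinopsis\.aspx', )),
                  callback='parse_peliculas', follow=True),)

    def parse_start_url(self, response):
        # CrawlSpider calls this hook for each response from start_urls,
        # so the theater data moves here and parse() is left untouched.
        item = CinemarkItem()
        hxs = HtmlXPathSelector(response)
        item['nombre'] = hxs.select('(//td[@class="title2"])[1]/text()').extract()
        item['direccion'] = hxs.select('(//td[@class="title2"])[2]/text()').extract()
        return item

    def parse_peliculas(self, response):
        item = PeliculasItem()
        hxs = HtmlXPathSelector(response)
        item['titulo'] = hxs.select('//td[@class="pop_up_title"]/text()').extract()
        return item

Renaming parse to any other name and wiring it up through a second Rule would also avoid the clash, but parse_start_url is the hook CrawlSpider provides for exactly this case.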

python regex screen-scraping scrapy
