Hello, how can I get my crawler working? I am able to log in, but after that nothing happens and nothing gets scraped. I have also been reading the Scrapy docs, and I really don't understand the rules used for scraping. Why does nothing happen after "Successfully logged in. Let's start crawling!"?
I also had this rule at the end of my else statement, but I removed it because it was never even called, since it sat inside my else block. So I moved it to the top of the start_requests() method, but that raised errors, so I removed my rules:
rules = (
    Rule(extractor, callback='parse_item', follow=True),
)
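For context on what that rules tuple does: a CrawlSpider reads `rules` once, as a class attribute (defining it inside `start_requests()` just creates a local variable the framework never sees), and for every downloaded response it applies each rule's link extractor, schedules the extracted links, and runs the callback on matching pages. The loop below is a plain-Python sketch of that mechanism, not actual Scrapy code; the page bodies, URLs, and `parse_item` stand-in are invented for illustration:

```python
import re

# Fake site: each URL maps to an HTML body (invented for this sketch).
PAGES = {
    "/": '<a href="/companies">companies</a> <a href="/login">login</a>',
    "/companies": '<a href="/companies/1">Acme</a>',
    "/companies/1": 'Acme Corp profile',
    "/login": 'login form',
}

LINK_RE = re.compile(r'href="([^"]+)"')

def parse_item(url, body):
    """Stand-in for the Rule's callback='parse_item'."""
    return {"url": url}

def crawl(start_url):
    seen, queue, items = set(), [start_url], []
    while queue:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        body = PAGES.get(url, "")
        items.append(parse_item(url, body))   # the rule's callback runs here
        queue.extend(LINK_RE.findall(body))   # follow=True: enqueue new links
    return items
```

Scrapy's `follow=True` corresponds to the `queue.extend(...)` line: without it (or without any rules tuple at all), no links are ever enqueued, and the crawl ends after the start URLs, which matches the "nothing happens" symptom.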
My code:
from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from linkedconv.items import LinkedconvItem

class LinkedPySpider(CrawlSpider):
    name = 'LinkedPy'
    allowed_domains = ['linkedin.com']
    login_page = 'https://www.linkedin.com/uas/login'
    # start_urls = ["http://www.linkedin.com/csearch/results?type=companies&keywords=&pplSearchOrigin=GLHD&pageKey=member-home&search=Search#facets=pplSearchOrigin%3DFCTD%26keywords%3D%26search%3DSubmit%26facet_CS%3DC%26facet_I%3D80%26openFacets%3DJO%252CN%252CCS%252CNFR%252CF%252CCCR%252CI"]
    start_urls = ["http://www.linkedin.com/csearch/results"]

    def start_requests(self):
        yield Request(
            url=self.login_page,
            callback=self.login,
            dont_filter=True
        )

    # def init_request(self):
    #     """This function is called before crawling starts."""
    # …

How can I convert the working example below into a CrawlSpider and make it crawl deeply, rather than stopping at the first main page? This example works fine with no errors, but I want to use a CrawlSpider instead of InitSpider and crawl deeply. Thanks in advance.
from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import Rule
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from linkedpy.items import LinkedpyItem

class LinkedPySpider(InitSpider):
    name = 'LinkedPy'
    allowed_domains = ['linkedin.com']
    login_page = 'https://www.linkedin.com/uas/login'
    start_urls = ["http://www.linkedin.com/csearch/results"]

    def init_request(self):
        """This function is called before crawling starts."""
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """Generate a login request."""
        return FormRequest.from_response(response,
            formdata={'session_key': 'xxxx@gmail.com', 'session_password': 'xxxxx'},
            callback=self.check_login_response)

    def check_login_response(self, response):
        """Check the response returned by a login …