Hello, how can I get my spider to work? I am able to log in, but after that nothing happens and nothing gets scraped. I have also been reading the Scrapy docs and I really don't understand the Rules used for crawling. Why does nothing happen after "Successfully logged in. Let's start crawling!"?
I also had the rule below at the end of my else statement, but I removed it because it was never even called, since it sat inside my else block. So I moved it to the top of the start_requests() method, but that raised an error, so I removed my rules:
rules = (
    Rule(extractor, callback='parse_item', follow=True),
)
My code:
from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

from linkedconv.items import LinkedconvItem


class LinkedPySpider(CrawlSpider):
    name = 'LinkedPy'
    allowed_domains = ['linkedin.com']
    login_page = 'https://www.linkedin.com/uas/login'
    # start_urls = ["http://www.linkedin.com/csearch/results?type=companies&keywords=&pplSearchOrigin=GLHD&pageKey=member-home&search=Search#facets=pplSearchOrigin%3DFCTD%26keywords%3D%26search%3DSubmit%26facet_CS%3DC%26facet_I%3D80%26openFacets%3DJO%252CN%252CCS%252CNFR%252CF%252CCCR%252CI"]
    start_urls = ["http://www.linkedin.com/csearch/results"]

    def start_requests(self):
        yield Request(
            url=self.login_page,
            callback=self.login,
            dont_filter=True,
        )

    # def init_request(self):
    #     """This function is called before crawling starts."""
    #     return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """Generate a login request."""
        return FormRequest.from_response(
            response,
            formdata={'session_key': 'myemail@gmail.com',
                      'session_password': 'mypassword'},
            callback=self.check_login_response)

    def check_login_response(self, response):
        """Check the response returned by a login request to see if we are
        successfully logged in."""
        if "Sign Out" in response.body:
            self.log("\n\n\nSuccessfully logged in. Let's start crawling!\n\n\n")
            # Now the crawling can begin..
            self.log('Hi, this is an item page! %s' % response.url)
            return
        else:
            self.log("\n\n\nFailed, Bad times :(\n\n\n")
            # Something went wrong, we couldn't log in, so nothing happens.

    def parse_item(self, response):
        self.log("\n\n\n We got data! \n\n\n")
        self.log('Hi, this is an item page! %s' % response.url)
        hxs = HtmlXPathSelector(response)
        sites = hxs.select("//ol[@id='result-set']/li")
        items = []
        for site in sites:
            item = LinkedconvItem()
            item['title'] = site.select('h2/a/text()').extract()
            item['link'] = site.select('h2/a/@href').extract()
            items.append(item)
        return items
My output:
C:\Users\ye831c\Documents\Big Data\Scrapy\linkedconv>scrapy crawl LinkedPy
2013-07-12 13:39:40-0500 [scrapy] INFO: Scrapy 0.16.5 started (bot: linkedconv)
2013-07-12 13:39:40-0500 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Enabled item pipelines:
2013-07-12 13:39:41-0500 [LinkedPy] INFO: Spider opened
2013-07-12 13:39:41-0500 [LinkedPy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-07-12 13:39:41-0500 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-07-12 13:39:41-0500 [LinkedPy] DEBUG: Crawled (200) <GET https://www.linkedin.com/uas/login> (referer: None)
2013-07-12 13:39:42-0500 [LinkedPy] DEBUG: Redirecting (302) to <GET http://www.linkedin.com/nhome/> from <POST https://www.linkedin.com/uas/login-submit>
2013-07-12 13:39:45-0500 [LinkedPy] DEBUG: Crawled (200) <GET http://www.linkedin.com/nhome/> (referer: https://www.linkedin.com/uas/login)
2013-07-12 13:39:45-0500 [LinkedPy] DEBUG:
    Successfully logged in. Let's start crawling!
2013-07-12 13:39:45-0500 [LinkedPy] DEBUG: Hi, this is an item page! http://www.linkedin.com/nhome/
2013-07-12 13:39:45-0500 [LinkedPy] INFO: Closing spider (finished)
2013-07-12 13:39:45-0500 [LinkedPy] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 1670,
     'downloader/request_count': 3,
     'downloader/request_method_count/GET': 2,
     'downloader/request_method_count/POST': 1,
     'downloader/response_bytes': 65218,
     'downloader/response_count': 3,
     'downloader/response_status_count/200': 2,
     'downloader/response_status_count/302': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 7, 12, 18, 39, 45, 136000),
     'log_count/DEBUG': 11,
     'log_count/INFO': 4,
     'request_depth_max': 1,
     'response_received_count': 2,
     'scheduler/dequeued': 3,
     'scheduler/dequeued/memory': 3,
     'scheduler/enqueued': 3,
     'scheduler/enqueued/memory': 3,
     'start_time': datetime.datetime(2013, 7, 12, 18, 39, 41, 50000)}
2013-07-12 13:39:45-0500 [LinkedPy] INFO: Spider closed (finished)
Right now the crawl ends in check_login_response(), because nothing tells Scrapy to do anything more.
start_requests() requests the login page: OK. login() submits the form and check_login_response() parses the result... and that's it, because check_login_response() returns nothing. To keep the crawl going, you need to return Request instances that tell Scrapy which pages to fetch next (see the Scrapy documentation on spider callbacks).
So, inside check_login_response(), you need to return a Request instance for a start page containing the links you want to crawl next, probably one of the URLs you defined in start_urls.
def check_login_response(self, response):
    """Check the response returned by a login request to see if we are
    successfully logged in."""
    if "Sign Out" in response.body:
        self.log("\n\n\nSuccessfully logged in. Let's start crawling!\n\n\n")
        # Now the crawling can begin..
        return Request(url='http://linkedin.com/page/containing/links')
By default, if you don't set a callback on a Request, the spider calls its parse() method on the response (http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spider.BaseSpider.parse).
In your case, CrawlSpider's built-in parse() method is then called for you automatically, and it applies the Rules you have defined to fetch the next pages.
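As an illustration only, here is a minimal sketch of what that means for your method (it reuses the URL from your start_urls; note that on a CrawlSpider you must not override parse(), since it is the method that applies the rules):

def check_login_response(self, response):
    if "Sign Out" in response.body:
        # No explicit callback: Scrapy hands the next response to self.parse(),
        # which on a CrawlSpider is the built-in method that applies the rules.
        return Request(url="http://www.linkedin.com/csearch/results")
        # This is equivalent to:
        # return Request(url="http://www.linkedin.com/csearch/results",
        #                callback=self.parse)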
You have to define the rules inside a rules attribute of your CrawlSpider class, at the same level as name, allowed_domains, etc.
http://doc.scrapy.org/en/latest/topics/spiders.html#crawlspider-example shows example rules. The main idea is that you tell the extractor, via the allow regular expression, what kind of absolute URLs within the page you are interested in. If you don't set allow on your SgmlLinkExtractor, it will match all links.
In your case, each rule should use parse_item() as the callback for these links.
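For example, the rules attribute could look something like the sketch below (the allow pattern is just a placeholder regex, not a tested LinkedIn URL pattern, and it reuses the Scrapy 0.16-era imports already present in your spider):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class LinkedPySpider(CrawlSpider):
    name = 'LinkedPy'
    allowed_domains = ['linkedin.com']
    start_urls = ["http://www.linkedin.com/csearch/results"]

    # rules lives at class level, next to name and allowed_domains.
    # The allow= regex below is only illustrative; replace it with a pattern
    # matching the absolute URLs you actually want to follow. Leaving allow
    # unset makes the extractor match every link on the page.
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'/csearch/results',)),
             callback='parse_item',
             follow=True),
    )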
Good luck parsing LinkedIn pages; I think a lot of what is inside those pages is generated via JavaScript and may not be in the HTML content that Scrapy fetches.