Tags: xpath, web-crawler, scrapy, web-scraping
I'm writing a web crawler with Scrapy to download the talk-back text from a web page.
Here is the relevant part of the page source for one specific talk-back:
<div id="site_comment_71339" class="site_comment site_comment-even large high-rank">
<div class="talkback-topic">
<a class="show-comment" data-ajax-url="/comments/71339.js?counter=97&num=57" href="/comments/71339?counter=97&num=57">57. talk back title here </a>
</div>
<div class="talkback-message"> blah blah blah talk-back message here </div>
....etc etc etc ......
When writing the XPath to get the message:
titles = hxs.xpath("//div[@class='site_comment site_comment-even large high-rank']")
and later:
item["title"] = titles.xpath("div[@class='talkback-message']text()").extract()
There are no errors, but it doesn't work. Any ideas? I think I'm not writing the path correctly, but I can't find the mistake.
Thanks :)
The whole code:
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from craigslist_sample.items import CraigslistSampleItem

class MySpider(BaseSpider):
    name = "craig"
    allowed_domains = ["tbk.co.il"]
    start_urls = ["http://www.tbk.co.il/tag/%D7%91%D7%A0%D7%99%D7%9E%D7%99%D7%9F_%D7%A0%D7%AA%D7%A0%D7%99%D7%94%D7%95/talkbacks"]

    def parse(self, response):
        hxs = Selector(response)
        titles = hxs.xpath("//div[@class='site_comment site_comment-even large high-rank']")
        items = []
        for titles in titles:
            item = CraigslistSampleItem()
            item["title"] = titles.xpath("div[@class='talkback-message']text()").extract()
            items.append(item)
        return items
Here is a snippet of the HTML page for #site_comment_74240:
<div class="site_comment site_comment-even small normal-rank" id="site_comment_74240">
<div class="talkback-topic">
<a href="/comments/74240?counter=1&num=144" class="show-comment" data-ajax-url="/comments/74240.js?counter=1&num=144">144. ???????</a>
</div>
<div class="talkback-username">
<table><tr>
<td>??????? ???? </td>
<td>(01.11.2013)</td>
</tr></table>
</div>
The "talkback-message" div is not in the HTML page when you fetch it for the first time; it is fetched asynchronously, via an AJAX query, when you click on the comment title, so you have to fetch it for each comment.
The comment blocks (titles in your code snippet) can be fetched with an XPath like //div[starts-with(@id, "site_comment_")], i.e. all divs whose "id" attribute starts with the string "site_comment_".
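For example, inside a parse() callback (a minimal sketch, assuming sel is a Selector built from the response, as in the snippets below):
titles = sel.xpath('//div[starts-with(@id, "site_comment_")]')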
You can also use CSS selectors with Selector.css(). In your case, you could fetch the comment blocks via their "id" (as I did above with XPath), so:
titles = sel.css("div[id^=site_comment_]")
Or use the "site_comment" class, without the other classes such as "site_comment-even", "site_comment-odd", "small", "normal-rank" or "high-rank":
titles = sel.css("div.site_comment")
Then you would issue a new Request using the URL found inside that comment div at ./div[@class="talkback-topic"]/a[@class="show-comment"]/@data-ajax-url, or with a CSS selector, div.talkback-topic > a.show-comment::attr(data-ajax-url) (by the way, ::attr(...) is not standard CSS; it's a Scrapy extension to CSS selectors using pseudo-element functions).
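As a sketch, here is the XPath variant of that extraction, relative to one selected comment block (comment, item and parse_javascript_comment are the names used in the skeleton spider below; Request comes from scrapy.http and urlparse is the standard library module, both imported there):
for url in comment.xpath('./div[@class="talkback-topic"]/a[@class="show-comment"]/@data-ajax-url').extract():
    yield Request(url=urlparse.urljoin(response.url, url),
                  callback=self.parse_javascript_comment,
                  meta={"item": item})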
What you get back from the AJAX call is some Javascript code, and you want to grab the content inside old.after(...):
var old = $("#site_comment_72765");
old.attr('id', old.attr('id') + '_small');
old.hide();
old.after("\n<div class=\"site_comment site_comment-odd large high-rank\" id=\"site_comment_72765\">\n <div class=\"talkback-topic\">\n <a href=\"/comments/72765?counter=42&num=109\" class=\"show-comment\" data-ajax-url=\"/comments/72765.js?counter=42&num=109\">109. ???? - ???? ????? ???? ????? ?????(??)<\/a>\n <\/div>\n \n <div class=\"talkback-message\">\n \n <\/div>\n \n <div class=\"talkback-username\">\n <table><tr>\n <td>????? <\/td>\n <td>(11.03.2012)<\/td>\n <\/tr><\/table>\n <\/div>\n <div class=\"rank-controllers\">\n <table><tr>\n \n <td class=\"rabk-link\"><a href=\"#\" data-thumb=\"/comments/72765/thumb?type=up\"><img alt=\"\" src=\"/images/elements/thumbU.png?1376839523\" /><\/a><\/td>\n <td> | <\/td>\n <td class=\"rabk-link\"><a href=\"#\" data-thumb=\"/comments/72765/thumb?type=down\"><img alt=\"\" src=\"/images/elements/thumbD.png?1376839523\" /><\/a><\/td>\n \n <td> | <\/td>\n <td>11<\/td>\n \n <\/tr><\/table>\n <\/div>\n \n <div class=\"talkback-links\">\n <a href=\"/comments/new?add_to_root=true&html_id=site_comment_72765&sibling_id=72765\">????? ????<\/a>\n \n <a href=\"/comments/72765/comments/new?html_id=site_comment_72765\">????? ??????<\/a>\n \n <a href=\"/i/offensive?comment_id=72765\" data-noajax=\"true\">????? ???? ??????<\/a>\n <\/div>\n \n<\/div>");
var new_comment = $("#site_comment_72765");
This is HTML data that you need to parse again, using something like Selector(text=this_ajax_html_data) together with an .//div[@class="talkback-message"]//text() XPath or a div.talkback-message ::text CSS selector.
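Put together, a minimal sketch of those two steps (js_body is a hypothetical variable holding the raw AJAX response body; the regex and the "<\/" clean-up are the same ones used in the skeleton spider below):
import re
from scrapy.selector import Selector

# grab what is inside old.after(...), decode the Javascript string, then re-parse it
m = re.search(r'old\.after\((?P<html>.+)\);', js_body)
if m:
    html = eval(m.group("html")).replace(r"<\/", "</")
    decoded = Selector(text=html, type="html")
    message = u''.join(decoded.css('div.talkback-message ::text').extract()).strip()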
Here is a skeleton spider to give you an idea of how to put this together:
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from scrapy.http import Request
from craigslist_sample.items import CraigslistSampleItem
import urlparse
import re

class MySpider(BaseSpider):
    name = "craig"
    allowed_domains = ["tbk.co.il"]
    start_urls = ["http://www.tbk.co.il/tag/%D7%91%D7%A0%D7%99%D7%9E%D7%99%D7%9F_%D7%A0%D7%AA%D7%A0%D7%99%D7%94%D7%95/talkbacks"]

    def parse(self, response):
        sel = Selector(response)
        comments = sel.css("div.site_comment")
        for comment in comments:
            item = CraigslistSampleItem()
            # this probably has to be fixed
            #item["title"] = comment.xpath("div[@class='talkback-message']text()").extract()

            # issue an additional request to fetch the Javascript
            # data containing the comment text
            # and pass the incomplete item via meta dict
            for url in comment.css('div.talkback-topic > a.show-comment::attr(data-ajax-url)').extract():
                yield Request(url=urlparse.urljoin(response.url, url),
                              callback=self.parse_javascript_comment,
                              meta={"item": item})
                break

    # the line we are looking for begins with "old.after"
    # and we want everything inside the parentheses
    _re_comment_html = re.compile(r'^old\.after\((?P<html>.+)\);$')

    def parse_javascript_comment(self, response):
        item = response.meta["item"]
        # loop on Javascript content lines
        for line in response.body.split("\n"):
            matching = self._re_comment_html.search(line.strip())
            if matching:
                # what's inside the parentheses is a Javascript string
                # with escaped double-quotes
                # a simple way to decode that into a Python string
                # is to use eval()
                # then there are these "<\/tag>" we want to remove
                html = eval(matching.group("html")).replace(r"<\/", "</")
                # once we have the HTML snippet, decode it using Selector()
                decoded = Selector(text=html, type="html")
                # and save the message text in the item
                item["message"] = u''.join(decoded.css('div.talkback-message ::text').extract()).strip()
                # and return it
                return item
You can try it with scrapy runspider tbkspider.py.