I am creating a crawler that takes user input and scrapes all of the links on a site. However, I need to limit crawling and link extraction to links from that domain only, not external domains. As far as the crawler goes, I have it where I need it to be. My problem is that for my allowed_domains function, I can't seem to pass in the Scrapy option that is set on the command line. Below is the first script that runs:
# First Script
import os

def userInput():
    user_input = raw_input("Please enter URL. Please do not include http://: ")
    os.system("scrapy runspider -a user_input='http://" + user_input + "' crawler_prod.py")

userInput()
The script it runs is the crawler, which crawls the given domain. Here is the crawler code:
#Crawler
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import Request
from scrapy.http import Request

class InputSpider(CrawlSpider):
    name = "Input"
    #allowed_domains = ["example.com"]

    def allowed_domains(self):
        self.allowed_domains = user_input

    def start_requests(self):
        yield Request(url=self.user_input)

    rules = [
        Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')
    ]

    def parse_item(self, response):
        x = HtmlXPathSelector(response)
        filename = "output.txt"
        open(filename, 'ab').write(response.url + "\n")
I have tried yielding the request that is sent in through the terminal command, but that crashes the crawler. The way I have it now also crashes the crawler. I have also tried just putting in allowed_domains=[user_input], which reports that user_input is not defined. I have been using Scrapy's Request to try to make this work, with no luck. Is there a better way to restrict the crawl from going outside of the given domain?
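For reference, any option passed with -a name=value is handed to the spider's __init__ as a keyword argument, so it can be turned into allowed_domains and start_urls before the crawl begins. A minimal sketch of that pattern (illustrative only, not the original code; the spider and attribute handling are assumptions):

from scrapy.spiders import CrawlSpider

class SketchSpider(CrawlSpider):
    name = "sketch"

    def __init__(self, user_input=None, *args, **kwargs):
        # 'user_input' matches the -a option name used by the first script
        super(SketchSpider, self).__init__(*args, **kwargs)
        # keep the bare domain for allowed_domains, the full URL for start_urls
        domain = user_input.replace("http://", "")
        self.allowed_domains = [domain]
        self.start_urls = ["http://" + domain]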
Edit:
Here is my new code:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from scrapy.spiders import BaseSpider
from scrapy import Request
from scrapy.http import Request
from scrapy.utils.httpobj import urlparse
#from run_first import *

class InputSpider(CrawlSpider):
    name = "Input"

    #allowed_domains = ["example.com"]
    #def allowed_domains(self):
    #    self.allowed_domains = user_input

    #def start_requests(self):
    #    yield Request(url=self.user_input)

    def __init__(self, *args, **kwargs):
        inputs = kwargs.get('urls', '').split(',') or []
        self.allowed_domains = [urlparse(d).netloc for d in inputs]
        # self.start_urls = [urlparse(c).netloc for c in inputs]  # For start_urls

    rules = [
        Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')
    ]

    def parse_item(self, response):
        x = HtmlXPathSelector(response)
        filename = "output.txt"
        open(filename, 'ab').write(response.url + "\n")
Here is the output log from the new code:
2017-04-18 18:18:01 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2017-04-18 18:18:01 [scrapy] INFO: Optional features available: ssl, http11, boto
2017-04-18 18:18:01 [scrapy] INFO: Overridden settings: {'LOG_FILE': 'output.log'}
2017-04-18 18:18:43 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2017-04-18 18:18:43 [scrapy] INFO: Optional features available: ssl, http11, boto
2017-04-18 18:18:43 [scrapy] INFO: Overridden settings: {'LOG_FILE': 'output.log'}
2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:1: ScrapyDeprecationWarning: Module `scrapy.contrib.spiders` is deprecated, use `scrapy.spiders` instead
from scrapy.contrib.spiders import CrawlSpider, Rule
2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:2: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors` is deprecated, use `scrapy.linkextractors` instead
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:2: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors.sgml` is deprecated, use `scrapy.linkextractors.sgml` instead
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
2017-04-18 18:18:43 [py.warnings] WARNING: /home/****-you/Python_Projects/Network-Multitool/crawler/crawler_prod.py:27: ScrapyDeprecationWarning: SgmlLinkExtractor is deprecated and will be removed in future releases. Please use scrapy.linkextractors.LinkExtractor
Rule(SgmlLinkExtractor(allow=()), follow=True, callback='parse_item')
2017-04-18 18:18:43 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2017-04-18 18:18:43 [boto] DEBUG: Retrieving credentials from metadata server.
2017-04-18 18:18:44 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/boto/utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "/usr/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
    raise URLError(err)
URLError: <urlopen error timed out>
2017-04-18 18:18:44 [boto] ERROR: Unable to read instance data, giving up
2017-04-18 18:18:44 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2017-04-18 18:18:44 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2017-04-18 18:18:44 [scrapy] INFO: Enabled item pipelines:
2017-04-18 18:18:44 [scrapy] INFO: Spider opened
2017-04-18 18:18:44 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-18 18:18:44 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-18 18:18:44 [scrapy] ERROR: Error while obtaining start requests
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/scrapy/core/engine.py", line 110, in _next_request
    request = next(slot.start_requests)
  File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 70, in start_requests
    yield self.make_requests_from_url(url)
  File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 73, in make_requests_from_url
    return Request(url, dont_filter=True)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 24, in __init__
    self._set_url(url)
  File "/usr/lib/python2.7/dist-packages/scrapy/http/request/__init__.py", line 59, in _set_url
    raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url:
2017-04-18 18:18:44 [scrapy] INFO: Closing spider (finished)
2017-04-18 18:18:44 [scrapy] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 4, 18, 22, 18, 44, 794155),
'log_count/DEBUG': 2,
'log_count/ERROR': 3,
'log_count/INFO': 7,
'start_time': datetime.datetime(2017, 4, 18, 22, 18, 44, 790331)}
2017-04-18 18:18:44 [scrapy] INFO: Spider closed (finished)
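The ValueError: Missing scheme in request url at the end of this log comes from the default start_requests, which builds a Request from every entry in start_urls and rejects anything that lacks a scheme such as http://. Here the offending URL is empty (nothing after the colon), which is what you get when kwargs.get('urls', '') finds no urls argument, or when urlparse(d).netloc is applied to a value that has no scheme. A hedged sketch of the distinction, reusing the names from the __init__ above:

# sketch only: netloc for allowed_domains, full URL (with scheme) for start_urls
self.allowed_domains = [urlparse(d).netloc for d in inputs]
self.start_urls = [d if d.startswith("http") else "http://" + d for d in inputs]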
Run Code Online (Sandbox Code Playgroud)
Edit:
By looking over the answer and re-reading the documentation, I was able to figure out the answer to my problem. Below is what I added to the crawler script to get it working.
def __init__(self, url=None, *args, **kwargs):
    super(InputSpider, self).__init__(*args, **kwargs)
    self.allowed_domains = [url]
    self.start_urls = ["http://" + url]
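Note that the keyword name has to match what is passed on the command line, so with this __init__ the wrapper script would invoke something like the following (illustrative; the file name is taken from the question):

scrapy runspider -a url=example.com crawler_prod.py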
You are missing a few things here.

allowed_domains cannot be overridden once the run has started. To deal with that you need to write your own offsite middleware, or at least modify the existing one with the changes you need.

What happens is that OffsiteMiddleware, which handles allowed_domains, converts the allowed_domains value into a regular expression string as soon as the spider opens, and after that the parameter is never used again.
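Roughly, the stock middleware in Scrapy 1.x does something like the following (a paraphrased sketch, not the exact source), which is why changing allowed_domains after startup has no effect:

# paraphrased sketch of scrapy.spidermiddlewares.offsite.OffsiteMiddleware (Scrapy 1.x)
from scrapy.utils.httpobj import urlparse_cached

class StockOffsiteSketch(object):
    def spider_opened(self, spider):
        # allowed_domains is read exactly once, here, and compiled into a regex
        # (get_host_regex builds that regex from spider.allowed_domains)
        self.host_regex = self.get_host_regex(spider)

    def should_follow(self, request, spider):
        # every later request is checked against the pre-built regex,
        # not against spider.allowed_domains itself
        host = urlparse_cached(request).hostname or ''
        return bool(self.host_regex.search(host))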
Add something like this to your middlewares.py:
from scrapy.spidermiddlewares.offsite import OffsiteMiddleware
from scrapy.utils.httpobj import urlparse_cached

class MyOffsiteMiddleware(OffsiteMiddleware):
    def should_follow(self, request, spider):
        """Return bool whether to follow a request"""
        # hostname can be None for wrong urls (like javascript links)
        host = urlparse_cached(request).hostname or ''
        if host in spider.allowed_domains:
            return True
        return False
Activate it in your settings.py:
SPIDER_MIDDLEWARES = {
    # enable our middleware
    'myspider.middlewares.MyOffsiteMiddleware': 500,
    # disable old middleware
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None,
}
Now your spider should follow whatever you have in allowed_domains, even if you modify it at run time.
Edit: for your case:
from scrapy.utils.httpobj import urlparse

class MySpider(Spider):
    def __init__(self, *args, **kwargs):
        input = kwargs.get('urls', '').split(',') or []
        self.allowed_domains = [urlparse(d).netloc for d in input]
Now you can run:
scrapy crawl myspider -a "urls=foo.com,bar.com"
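Since the question runs a standalone spider file rather than a project spider, the equivalent with runspider would presumably be:

scrapy runspider -a "urls=foo.com,bar.com" crawler_prod.py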