I am new to Python and Scrapy. I used the method from this blog post, Running multiple scrapy spiders programmatically, to run my spiders inside a Flask app. Here is the code:
# NOTE: this snippet uses the pre-1.0 Scrapy API (Crawler, scrapy.log,
# crawler.configure()), as in the blog post it was taken from
from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings

# list of crawlers
TO_CRAWL = [DmozSpider, EPGDspider, GDSpider]

# crawlers that are running
RUNNING_CRAWLERS = []

def spider_closing(spider):
    """
    Activates on spider closed signal
    """
    log.msg("Spider closed: %s" % spider, level=log.INFO)
    RUNNING_CRAWLERS.remove(spider)
    if not RUNNING_CRAWLERS:
        reactor.stop()

# start logger
log.start(loglevel=log.DEBUG)

# set up the crawler and start to crawl one spider at a time
for spider in TO_CRAWL:
    settings = Settings()

    # crawl responsibly
    settings.set("USER_AGENT", "Kiran Koduru (+http://kirankoduru.github.io)")
    crawler = Crawler(settings)
    crawler_obj = spider()
    RUNNING_CRAWLERS.append(crawler_obj)

    # stop reactor when spider closes
    crawler.signals.connect(spider_closing, signal=signals.spider_closed)
    crawler.configure()
    crawler.crawl(crawler_obj)
    crawler.start()

# blocks process; so always keep as the last statement
reactor.run()
Here is my spider code:
import scrapy
from scrapy.selector import Selector
from scrapy.http import Request

# EPGD is the project's Item class; the import path depends on the project layout
from items import EPGD


class EPGDspider(scrapy.Spider):
    name = "EPGD"
    allowed_domains = ["epgd.biosino.org"]
    term = "man"
    start_urls = ["http://epgd.biosino.org/EPGD/search/textsearch.jsp?textquery="+term+"&submit=Feeling+Lucky"]
    MONGODB_DB = name + "_" + term
    MONGODB_COLLECTION = name + "_" + term

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//tr[@class="odd"]|//tr[@class="even"]')
        url_list = []
        base_url = "http://epgd.biosino.org/EPGD"
        for site in sites:
            item = EPGD()
            item['genID'] = map(unicode.strip, site.xpath('td[1]/a/text()').extract())
            item['genID_url'] = base_url+map(unicode.strip, site.xpath('td[1]/a/@href').extract())[0][2:]
            item['taxID'] = map(unicode.strip, site.xpath('td[2]/a/text()').extract())
            item['taxID_url'] = map(unicode.strip, site.xpath('td[2]/a/@href').extract())
            item['familyID'] = map(unicode.strip, site.xpath('td[3]/a/text()').extract())
            item['familyID_url'] = base_url+map(unicode.strip, site.xpath('td[3]/a/@href').extract())[0][2:]
            item['chromosome'] = map(unicode.strip, site.xpath('td[4]/text()').extract())
            item['symbol'] = map(unicode.strip, site.xpath('td[5]/text()').extract())
            item['description'] = map(unicode.strip, site.xpath('td[6]/text()').extract())
            yield item

        sel_tmp = Selector(response)
        link = sel_tmp.xpath('//span[@id="quickPage"]')
        for site in link:
            url_list.append(site.xpath('a/@href').extract())

        for i in range(len(url_list[0])):
            if cmp(url_list[0][i], "#") == 0:
                if i+1 < len(url_list[0]):
                    print url_list[0][i+1]
                    actual_url = "http://epgd.biosino.org/EPGD/search/"+ url_list[0][i+1]
                    yield Request(actual_url, callback=self.parse)
                    break
                else:
                    print "The index is out of range!"
As you can see, there is a parameter term = 'man' in my code which is part of my start URLs. I don't want this parameter to be fixed, so I'm wondering how to pass the start URL or the term parameter dynamically in my program. When running a spider from the command line, there is a way to pass arguments, like this:
class MySpider(BaseSpider):
    name = 'my_spider'

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.start_urls = [kwargs.get('start_url')]
And start it like: scrapy crawl my_spider -a start_url="http://some_url"
Could anyone tell me how to deal with this?
First of all, to run multiple spiders in a script, the recommended way is to use scrapy.crawler.CrawlerProcess, where you pass spider classes, not spider instances.
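A minimal sketch of that set-up (assuming the three spider classes from the question are importable in the script):

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    # crawl responsibly, as in the original snippet
    "USER_AGENT": "Kiran Koduru (+http://kirankoduru.github.io)",
})

# pass the spider classes themselves; CrawlerProcess starts and stops
# the Twisted reactor for you, so no manual reactor.run() is needed
process.crawl(DmozSpider)
process.crawl(EPGDspider)
process.crawl(GDSpider)
process.start()  # blocks until all spiders have finished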
To pass arguments to your spiders with CrawlerProcess, you just add the arguments to the .crawl() call after the spider subclass, e.g.
process.crawl(DmozSpider, term='someterm', someotherterm='anotherterm')
Arguments passed this way are then made available as spider attributes (the same as with -a term=someterm on the command line).
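So inside the spider the values can be read straight off self. A hypothetical parse method just to illustrate the attribute access (self.logger assumes Scrapy 1.0+):

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"

    def parse(self, response):
        # 'term' and 'someotherterm' from the .crawl() call above
        # arrive as plain instance attributes
        self.logger.info("term=%s, someotherterm=%s",
                         self.term, self.someotherterm)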
Finally, instead of building start_urls in __init__, you can achieve the same with start_requests, and you can build the initial requests like this, using self.term:
def start_requests(self):
    yield Request("http://epgd.biosino.org/"
                  "EPGD/search/textsearch.jsp?"
                  "textquery={}"
                  "&submit=Feeling+Lucky".format(self.term))