How can I save the downloaded files when running a spider on Scrapinghub?


stockInfo.py contains:

import scrapy
import re
import pkgutil

class QuotesSpider(scrapy.Spider):
    name = "stockInfo"
    # read the list of start URLs from a resource file bundled with the package
    data = pkgutil.get_data("tutorial", "resources/urls.txt")
    data = data.decode()
    start_urls = data.split("\r\n")

    def parse(self, response):
        # use the 6-digit company code in the URL as the file name
        company = re.findall("[0-9]{6}", response.url)[0]
        filename = '%s_info.html' % company
        with open(filename, 'wb') as f:
            f.write(response.body)

I run the stockInfo spider from the Windows command prompt:

d:
cd  tutorial
scrapy crawl stockInfo

All the web pages for the URLs listed in resources/urls.txt are then downloaded into the directory d:/tutorial.

Then I deploy the spider to Scrapinghub and run the stockInfo spider.


No errors occur, but where are the downloaded web pages?
And how do the following lines execute on Scrapinghub?

        with open(filename, 'wb') as f:
            f.write(response.body)

How can I save the data on Scrapinghub and download it from Scrapinghub after the job is finished?

First, install scrapinghub:

pip install scrapinghub[msgpack]

Following Thiago Curvelo's answer, I rewrote the spider and deployed it to my Scrapinghub project:

Deploy log location: C:\Users\dreams\AppData\Local\Temp\shub_deploy_yzstvtj8.log
Error: Deploy failed: b'{"status": "error", "message": "Internal error"}'
    _get_apisettings, commands_module='sh_scrapy.commands')
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 148, in _run_usercode
    _run(args, settings)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 103, in _run
    _run_scrapy(args, settings)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 111, in _run_scrapy
    execute(settings=settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 148, in execute
    cmd.crawler_process = CrawlerProcess(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 243, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 134, in __init__
    self.spider_loader = _get_spider_loader(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 330, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 61, in from_settings
    return cls(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 25, in __init__
    self._load_all_spiders()
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 47, in _load_all_spiders
    for module in walk_modules(name):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/app/__main__.egg/mySpider/spiders/stockInfo.py", line 4, in <module>
ImportError: cannot import name ScrapinghubClient
{"message": "shub-image-info exit code: 1", "details": null, "error": "image_info_error"}
{"status": "error", "message": "Internal error"}

requirements.txt contains only one line:

scrapinghub[msgpack]

scrapinghub.yml contains:

project: 123456
requirements:
  file: requirements.tx

Now deploy it:

D:\mySpider>shub deploy 123456
Packing version 1.0
Deploying to Scrapy Cloud project "123456"
Deploy log last 30 lines:

Deploy log location: C:\Users\dreams\AppData\Local\Temp\shub_deploy_4u7kb9ml.log
Error: Deploy failed: b'{"status": "error", "message": "Internal error"}'
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 148, in _run_usercode
    _run(args, settings)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 103, in _run
    _run_scrapy(args, settings)
  File "/usr/local/lib/python2.7/site-packages/sh_scrapy/crawl.py", line 111, in _run_scrapy
    execute(settings=settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 148, in execute
    cmd.crawler_process = CrawlerProcess(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 243, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 134, in __init__
    self.spider_loader = _get_spider_loader(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 330, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 61, in from_settings
    return cls(settings)
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 25, in __init__
    self._load_all_spiders()
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 47, in _load_all_spiders
    for module in walk_modules(name):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/tmp/unpacked-eggs/__main__.egg/mySpider/spiders/stockInfo.py", line 5, in <module>
    from scrapinghub import ScrapinghubClient
ImportError: cannot import name ScrapinghubClient
{"message": "shub-image-info exit code: 1", "details": null, "error": "image_info_error"}
{"status": "error", "message": "Internal error"}     

1. The problem still exists:

ImportError: cannot import name ScrapinghubClient

2. My local PC runs Windows 7 and only has Python 3.7 installed, so why does the error message mention:

File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules

Is this an error message from Scrapinghub (the remote end) that is just sent back to my local machine for display?

Answer by Thiago Curvelo:

Writing data to disk in a cloud environment is not reliable these days, since everything runs in containers and containers are ephemeral.

But you can save your data using Scrapinghub's Collections API. You can use it directly through the HTTP endpoints or through this wrapper: https://python-scrapinghub.readthedocs.io/en/latest/

With python-scrapinghub, your code would look like this:

from scrapinghub import ScrapinghubClient
from contextlib import closing

project_id = '12345'
apikey = 'XXXX'
client = ScrapinghubClient(apikey)
store = client.get_project(project_id).collections.get_store('mystuff')

#...

    def parse(self, response):
        company = re.findall("[0-9]{6}",response.url)[0]
        with closing(store.create_writer()) as writer:
            writer.write({
                '_key': company, 
                'body': response.body}
            )        
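
Once the job has finished, the saved pages can also be pulled back out of the collection from your local machine with the same python-scrapinghub client, either one item at a time with store.get() or all at once with store.iter(). Below is a minimal sketch, assuming the same project id, API key and 'mystuff' store as above; the exact item layout is an assumption, so check the python-scrapinghub docs if your items look different:

# read_collection.py - minimal sketch for downloading the saved pages after the job
from scrapinghub import ScrapinghubClient

project_id = '12345'
apikey = 'XXXX'
client = ScrapinghubClient(apikey)
store = client.get_project(project_id).collections.get_store('mystuff')

# iterate over every item the spider wrote; each item should carry the
# '_key' (the 6-digit company code) and the 'body' stored by the spider
for item in store.iter():
    body = item['body']
    with open('%s_info.html' % item['_key'], 'wb') as f:
        f.write(body if isinstance(body, bytes) else body.encode('utf-8'))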

After saving something into a collection, a link will appear in your dashboard:

(Screenshot: the Collections link in the Scrapinghub dashboard)

Edit:

To make sure the dependencies (scrapinghub[msgpack]) are installed in the cloud, add them to your requirements.txt or Pipfile and reference that file in your scrapinghub.yml. For example:

# project_directory/scrapinghub.yml

projects:
  default: 12345

stacks:
  default: scrapy:1.5-py3

requirements:
  file: requirements.txt

https://shub.readthedocs.io/en/stable/deploying.html#deploying-dependencies
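
For reference, the requirements.txt referenced by that file can stay exactly as shown in the question, one line per dependency:

# project_directory/requirements.txt
scrapinghub[msgpack]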

That way, scrapinghub (the cloud service) will install scrapinghub (the Python library). :)

I hope it helps.