Posts by akh*_*hi1

Cannot import name _uuid_generate_random in heroku django

I am working on a project that scans a user's Gmail inbox and provides a report. I have deployed it on Heroku with the following specifications:

Language: Python 2.7

Framework: Django 1.8

Task scheduler: Celery (RabbitMQ Bigwig for the broker URL)
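For reference, the usual Celery-with-Django wiring from that era (Celery 3.1, Django 1.8) looks roughly like the sketch below. The module path `proj/celery.py`, the project name `proj`, and the `RABBITMQ_BIGWIG_URL` environment variable are assumptions for illustration, not details taken from the question.

```python
# proj/celery.py -- minimal sketch of the documented Celery 3.1 + Django setup;
# "proj" and RABBITMQ_BIGWIG_URL are illustrative assumptions.
from __future__ import absolute_import

import os

from celery import Celery
from django.conf import settings

# Make Django settings available before the app is configured.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

# The RabbitMQ Bigwig add-on exposes its broker URL as a config var on Heroku.
app = Celery('proj', broker=os.environ.get('RABBITMQ_BIGWIG_URL'))

app.config_from_object('django.conf:settings')

# Discover tasks.py modules in all installed Django apps.
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
```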

Now when Heroku runs it, Celery gives me no output. On pushing to Heroku it shows a collectstatic configuration error. I have tried using the whitenoise package.
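For context, the standard WhiteNoise wiring for a Django 1.8 project (the pre-3.0 `whitenoise.django` API current at the time) looks roughly like this; the `BASE_DIR` layout and `staticfiles` directory name are illustrative, not taken from the question.

```python
# settings.py -- sketch of the standard WhiteNoise setup for Django 1.8;
# paths are illustrative.
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Where collectstatic gathers files on the Heroku dyno.
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
STATIC_URL = '/static/'

# Serve compressed, cache-busted static files through WhiteNoise.
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'

# wsgi.py -- wrap the WSGI application so WhiteNoise serves the static files:
# from django.core.wsgi import get_wsgi_application
# from whitenoise.django import DjangoWhiteNoise
# application = DjangoWhiteNoise(get_wsgi_application())
```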

I also tried running: heroku run python manage.py collectstatic --dry-run --noinput, but I still get the same error.

$ heroku run python manage.py collectstatic --noinput gives the following details of the error:

File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/app/.heroku/python/lib/python2.7/site-packages/django/core/management/__init__.py", line 303, in execute
settings.INSTALLED_APPS
File "/app/.heroku/python/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in __getattr__
self._setup(name)
File "/app/.heroku/python/lib/python2.7/site-packages/django/conf/__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "/app/.heroku/python/lib/python2.7/site-packages/django/conf/__init__.py", line 92, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File …

python django heroku celery

57 votes
2 answers
30k views

python.failure.Failure OpenSSL.SSL.Error in scrapy (version 1.0.4)

I am working on a data scraping project, and my code uses Scrapy (version 1.0.4) and Selenium (version 2.47.1).

import time

from scrapy import Spider
from scrapy.selector import Selector
from scrapy.http import Request
from scrapy.spiders import CrawlSpider
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

class TradesySpider(CrawlSpider):
    name = 'tradesy'
    start_urls = ['My Start url',]

    def __init__(self):
        super(TradesySpider, self).__init__()
        self.driver = webdriver.Firefox()

    def parse(self, response):
        self.driver.get(response.url)
        while True:
            tradesy_urls = Selector(response).xpath('//div[@id="right-panel"]')
            data_urls = tradesy_urls.xpath('div[@class="item streamline"]/a/@href').extract()
            for link in data_urls:
                url = 'My base url' + link
                yield Request(url=url, callback=self.parse_data)
                time.sleep(10)
            try:
                # Next-page button; stop paginating once it is no longer present.
                data_path = self.driver.find_element_by_xpath('//*[@id="page-next"]')
            except NoSuchElementException:
                break
            data_path.click()
            time.sleep(10)

    def parse_data(self, response):
        'Scrapy …

ssl scrapy python-2.7

7 votes
1 answer
4709 views

Tag statistics

celery ×1

django ×1

heroku ×1

python ×1

python-2.7 ×1

scrapy ×1

ssl ×1