I am using Scrapy for a project. I run the following command to deploy the project:
$scrapy deploy -l
Then I get the following output:
scrapysite           http://localhost:6800/
$cat scrapy.cfg
[settings] 
default = scrapBib.settings
[deploy:scrapysite]  
url = http://localhost:6800/  
project = scrapBib
$scrapy deploy scrapysite -p scrapBib
Building egg of scrapBib-1346242513
'build/lib.linux-x86_64-2.7' does not exist -- can't clean it
'build/bdist.linux-x86_64' does not exist -- can't clean it
'build/scripts-2.7' does not exist -- can't clean it
zip_safe flag not set; analyzing archive contents...
Deploying scrapBib-1346242513 to http://localhost:6800/addversion.json
2012-08-29 17:45:14+0530 [HTTPChannel,22,127.0.0.1] 127.0.0.1 - - [29/Aug/2012:12:15:13 +0000] "POST /addversion.json HTTP/1.1" 200 79 "-" "Python-urllib/2.7"
Server response (200):
{"status": "ok", "project": "scrapBib", "version": "1346242513", "spiders": 0}
As you can see, spiders is reported as 0, even though I have written 3 spiders in the project's spiders/ folder. Because of that, I cannot start a crawl with a curl request. Please help.
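For reference, once the deploy reports the spiders correctly, a crawl can be scheduled through scrapyd's schedule.json endpoint. This is a minimal sketch assuming the default endpoint shown above and a hypothetical spider name myspider:

$curl http://localhost:6800/schedule.json -d project=scrapBib -d spider=myspider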
I also ran into this problem once and did two things:
1) Delete project.egg-info, build, and setup.py from your local system.
2) Delete all deployed versions from the server.
Then try deploying again and it should be fixed (see the sketch below)...
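A minimal sketch of those two steps, assuming the commands are run from the project root, the default scrapyd endpoint, and that *.egg-info matches the generated egg-info directory; delproject.json removes the project together with every uploaded version:

$rm -rf build/ *.egg-info setup.py                                # 1) remove the locally generated build artifacts
$curl http://localhost:6800/delproject.json -d project=scrapBib   # 2) delete all deployed versions from the server
$scrapy deploy scrapysite -p scrapBib                             # then redeploy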