Sja*_*aak 6 python scrapy scrapyd
I'm running Scrapyd and hitting a strange problem when I schedule 4 spiders at the same time:
2012-02-06 15:27:17+0100 [HTTPChannel,0,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:17+0100 [HTTPChannel,1,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:17+0100 [HTTPChannel,2,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:17+0100 [HTTPChannel,3,127.0.0.1] 127.0.0.1 - - [06/Feb/2012:14:27:16 +0000] "POST /schedule.json HTTP/1.1" 200 62 "-" "python-requests/0.10.1"
2012-02-06 15:27:18+0100 [Launcher] Process started: project='thz' spider='spider_1' job='abb6b62650ce11e19123c8bcc8cc6233' pid=2545
2012-02-06 15:27:19+0100 [Launcher] Process finished: project='thz' spider='spider_1' job='abb6b62650ce11e19123c8bcc8cc6233' pid=2545
2012-02-06 15:27:23+0100 [Launcher] Process started: project='thz' spider='spider_2' job='abb72f8e50ce11e19123c8bcc8cc6233' pid=2546
2012-02-06 15:27:24+0100 [Launcher] Process finished: project='thz' spider='spider_2' job='abb72f8e50ce11e19123c8bcc8cc6233' pid=2546
2012-02-06 15:27:28+0100 [Launcher] Process started: project='thz' spider='spider_3' job='abb76f6250ce11e19123c8bcc8cc6233' pid=2547
2012-02-06 15:27:29+0100 [Launcher] Process finished: project='thz' spider='spider_3' job='abb76f6250ce11e19123c8bcc8cc6233' pid=2547
2012-02-06 15:27:33+0100 [Launcher] Process started: project='thz' spider='spider_4' job='abb7bb8e50ce11e19123c8bcc8cc6233' pid=2549
2012-02-06 15:27:35+0100 [Launcher] Process finished: project='thz' spider='spider_4' job='abb7bb8e50ce11e19123c8bcc8cc6233' pid=2549
I have configured Scrapyd with these settings:
[scrapyd]
max_proc = 10
Why doesn't Scrapyd run the spiders at the same time, as quickly as they are scheduled?
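For reference, the four schedule.json calls in the log above can be reproduced with a short script like the following (a sketch: the project name 'thz' and the spider names are taken from the log, and Scrapyd is assumed to be listening on its default port 6800):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumption: a local Scrapyd instance on the default port.
SCHEDULE_URL = "http://localhost:6800/schedule.json"

def schedule(project, spider):
    """POST to Scrapyd's schedule.json endpoint and return the decoded JSON reply."""
    data = urlencode({"project": project, "spider": spider}).encode()
    with urlopen(SCHEDULE_URL, data=data) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Fire the four requests back to back, as in the log.
    for spider in ("spider_1", "spider_2", "spider_3", "spider_4"):
        print(schedule("thz", spider))
```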
I solved this by editing scrapyd/app.py at line 30, changing timer = TimerService(5, poller.poll) to timer = TimerService(0.1, poller.poll).
Edit: AliBZ's comment below about the configuration settings is a better way to change the polling frequency.
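For reference, Scrapyd exposes this as the poll_interval option, so the same effect can be had from the config file instead of patching app.py (a sketch; check the option name against the documentation for your Scrapyd version):

```ini
[scrapyd]
max_proc      = 10
poll_interval = 0.1
```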
In my experience with scrapyd, it does not run a spider the instant you schedule it. It usually waits a little while until the current spider is up and running, and only then starts the next spider process (scrapy crawl).
So scrapyd launches processes one by one until the max_proc count is reached.
From your log I can see that each of your spiders runs for about 1 second. I think that if they ran for at least 30 seconds, you would see all of them running at once.
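The timing in the log fits that explanation combined with the default 5-second poll: the poller starts at most one queued job per tick, so four 1-second spiders begin roughly 5 seconds apart instead of together. A toy model of that behaviour (an illustration only, not Scrapyd's actual code):

```python
def launch_ticks(n_jobs, poll_interval=5.0):
    """Toy model: the poller starts at most one queued job per poll tick,
    so n queued jobs begin at ticks 0, 1, 2, ... (assuming max_proc slots
    remain free, as in the log where each job finishes within a second)."""
    return [tick * poll_interval for tick in range(n_jobs)]

# With the default 5 s poll, 4 queued spiders start spread over 15 seconds;
# with a 0.1 s poll they start within ~0.3 s, i.e. effectively together.
```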
Views: 3656