How do I download from a list of URLs with a pause between each download?

I have a list of URLs in url.txt, for example:
http://manuals.info.apple.com/cs_CZ/Apple_TV_2nd_gen_Setup_Guide_cz.pdf
http://manuals.info.apple.com/cs_CZ/apple_tv_3rd_gen_setup_cz.pdf
http://manuals.info.apple.com/cs_CZ/imac_late2012_quickstart_cz.pdf
http://manuals.info.apple.com/cs_CZ/ipad_4th-gen-ipad-mini_info_cz.pdf
http://manuals.info.apple.com/cs_CZ/iPad_iOS4_Important_Product_Info_CZ.pdf
http://manuals.info.apple.com/cs_CZ/iPad_iOS4_Uzivatelska_prirucka.pdf
http://manuals.info.apple.com/cs_CZ/ipad_ios5_uzivatelska_prirucka.pdf
http://manuals.info.apple.com/cs_CZ/ipad_ios6_user_guide_cz.pdf
http://manuals.info.apple.com/cs_CZ/ipad_uzivatelska_prirucka.pdf
I tried wget -i url.txt, but it stops after a while because the server detects the unfriendly crawling.

How can I insert a pause between each URL?

And how would I do this with Scrapy?
With wget, use --wait together with --random-wait to pause between downloads:

wget --wait=10 --random-wait --input-file=url.txt
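If you would rather script the pause yourself, here is a minimal Python sketch (my own illustration, not part of the original answer) that reads url.txt and sleeps for a random interval between downloads; the 5-15 second range is an arbitrary choice:

import random
import time
import urllib.request

with open("url.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    # Name each file after the last path segment of its URL
    filename = url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, filename)
    # Pause 5-15 seconds, mimicking wget's --wait/--random-wait
    time.sleep(random.uniform(5, 15))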
With Scrapy, set DOWNLOAD_DELAY and RANDOMIZE_DOWNLOAD_DELAY:

scrapy crawl yourbot -s DOWNLOAD_DELAY=10 -s RANDOMIZE_DOWNLOAD_DELAY=1
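The command above assumes you already have a spider named yourbot. As a sketch of what such a spider might look like (the class name, the save_pdf callback, and reading url.txt from the working directory are assumptions for illustration):

import scrapy

class ManualsSpider(scrapy.Spider):
    name = "yourbot"
    # Wait ~10 s between downloads; randomize so the pattern looks less mechanical
    custom_settings = {
        "DOWNLOAD_DELAY": 10,
        "RANDOMIZE_DOWNLOAD_DELAY": True,
    }

    def start_requests(self):
        with open("url.txt") as f:
            for line in f:
                url = line.strip()
                if url:
                    yield scrapy.Request(url, callback=self.save_pdf)

    def save_pdf(self, response):
        # Save each PDF under its URL's basename
        filename = response.url.rsplit("/", 1)[-1]
        with open(filename, "wb") as f:
            f.write(response.body)

With RANDOMIZE_DOWNLOAD_DELAY enabled, Scrapy waits between 0.5x and 1.5x DOWNLOAD_DELAY between consecutive requests to the same site.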