I am trying to use Python to log into a website and collect information from several of its pages, and I get the following error:
Traceback (most recent call last):
  File "extract_test.py", line 43, in <module>
    response=br.open(v)
  File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203, in open
    return self._mech_open(url, data, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 255, in _mech_open
    raise response
mechanize._response.httperror_seek_wrapper: HTTP Error 429: Unknown Response Code
I worked around it with time.sleep() and that does work, but it feels crude and unreliable. Is there a better way to avoid this error?
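For reference, the sleep-based workaround I have now is roughly the retry loop sketched below (simplified; the attempt count, the delays, and reading the Retry-After header are my own guesses about what the server wants, and I am assuming the 429 surfaces as a mechanize.HTTPError):

import time
import mechanize

def open_with_backoff(br, url, max_attempts=5):
    # Retry br.open() when the server answers 429, waiting longer each time.
    delay = 5
    for attempt in range(max_attempts):
        try:
            return br.open(url)
        except mechanize.HTTPError as e:
            if e.code != 429:
                raise
            # Honour Retry-After when the server sends it (assumed to be a
            # number of seconds here), otherwise back off exponentially.
            wait = e.info().get("Retry-After")
            time.sleep(int(wait) if wait else delay)
            delay *= 2
    raise RuntimeError("still rate limited after %d attempts: %s" % (max_attempts, url))

In the loop over urls_list (set up below) I then call response = open_with_backoff(br, v) instead of response = br.open(v), but it still feels like guessing at the right delay.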
Here is my code:
import mechanize
import cookielib
import re
first=("example.com/page1")
second=("example.com/page2")
third=("example.com/page3")
fourth=("example.com/page4")
## I have seven URLs I want to open
urls_list=[first,second,third,fourth]
br = mechanize.Browser()
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# …

I want to parse a robots.txt file with Python. I have looked at robotsParser and robotsExclusionParser, but nothing really meets my criteria. I want to get all the disallowed and allowed URLs in one pass, instead of manually checking each URL for whether it is allowed. Is there a library that can do this?
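To make the requirement concrete, the kind of one-shot dump I am after looks like the sketch below, written against the plain robots.txt format with urllib2 (the function name robots_rules and the shape of the returned dict are my own illustration, not from any existing library; robots.txt lists path prefixes/patterns rather than complete URLs, so that is what gets collected):

import urllib2
from collections import defaultdict

def robots_rules(site):
    # Fetch site/robots.txt and return {user-agent: {"allow": [...], "disallow": [...]}}.
    # NOTE: a sketch only; wildcard matching (* and $) and Crawl-delay are ignored.
    text = urllib2.urlopen(site.rstrip("/") + "/robots.txt").read()
    rules = defaultdict(lambda: {"allow": [], "disallow": []})
    current = []        # user-agents that the following rules apply to
    seen_rule = True    # True so that the first User-agent line starts a new group
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and blank lines
        if not line or ":" not in line:
            continue
        field, value = [part.strip() for part in line.split(":", 1)]
        field = field.lower()
        if field == "user-agent":
            if seen_rule:
                current = []                  # a new group of User-agent lines starts
                seen_rule = False
            current.append(value)
        elif field in ("allow", "disallow") and value:
            # An empty "Disallow:" means "allow everything", so it is skipped above.
            seen_rule = True
            for agent in current:
                rules[agent][field].append(value)
    return dict(rules)

# e.g. robots_rules("http://example.com")["*"]["disallow"]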