The documentation says that I can:
lxml can parse from a local file, an HTTP URL or an FTP URL. It also auto-detects and reads gzip-compressed XML files (.gz).
(from http://lxml.de/parsing.html, under "Parsers")
But a quick experiment seems to suggest otherwise:
Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:45:13) [MSC v.1600 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from lxml import etree
>>> parser = etree.HTMLParser()
>>> from urllib.request import urlopen
>>> with urlopen('https://pypi.python.org/simple') as f:
...     tree = etree.parse(f, parser)
...
>>> tree2 = etree.parse('https://pypi.python.org/simple', parser)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lxml.etree.pyx", line 3299, in lxml.etree.parse (src\lxml\lxml.etree.c:72655)
File "parser.pxi", line 1791, in lxml.etree._parseDocument (src\lxml\lxml.etree.c:106263)
File "parser.pxi", line 1817, in lxml.etree._parseDocumentFromURL (src\lxml\lxml.etree.c:106564)
File "parser.pxi", line 1721, in lxml.etree._parseDocFromFile (src\lxml\lxml.etree.c:105561)
File "parser.pxi", line 1122, in lxml.etree._BaseParser._parseDocFromFile (src\lxml\lxml.etree.c:100456)
File "parser.pxi", line 580, in lxml.etree._ParserContext._handleParseResultDoc (src\lxml\lxml.etree.c:94543)
File "parser.pxi", line 690, in lxml.etree._handleParseResult (src\lxml\lxml.etree.c:96003)
File "parser.pxi", line 618, in lxml.etree._raiseParseError (src\lxml\lxml.etree.c:95015)
OSError: Error reading file 'https://pypi.python.org/simple': failed to load external entity "https://pypi.python.org/simple"
>>>
I can use the urlopen approach, but the documentation seems to imply that passing a URL directly is somehow preferable. Also, if the documentation is inaccurate, I am a bit wary of relying on lxml, particularly if I start needing to do more complicated things.
What is the correct way to parse HTML from a known URL with lxml, and where is this documented?
Update: I get the same error if I use an http URL instead of an https one.
Answer (Pau*_*ore, 10 votes):
The problem is that lxml does not support HTTPS URLs, and http://pypi.python.org/simple redirects to the HTTPS version.
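(You can see the redirect for yourself with a quick check. This is only a sketch, assuming pypi.python.org is still reachable and still redirects; urlopen follows redirects automatically, and geturl() reports the final address:)

from urllib.request import urlopen

with urlopen('http://pypi.python.org/simple') as f:
    # geturl() returns the URL after any redirects have been followed,
    # so it should print the https:// address here.
    print(f.geturl())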
So for any secure site, you need to read the URL yourself:
from lxml import etree
from urllib.request import urlopen

parser = etree.HTMLParser()

# Fetch the page ourselves (urllib handles HTTPS), then hand the open
# file-like object to lxml instead of passing the URL string.
with urlopen('https://pypi.python.org/simple') as f:
    tree = etree.parse(f, parser)
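Once the parse succeeds, tree is a regular lxml ElementTree and can be queried as usual. For example (a sketch only; the XPath below simply assumes the page is the PyPI "simple" index, which is a flat list of <a> links):

# Collect the text of every link on the page -- on the "simple" index
# that is one entry per package name.
links = tree.xpath('//a/text()')
print(len(links), 'links found')
print(links[:5])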
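As an aside (not part of the original answer), another common pattern is to read the response body yourself and hand the bytes to etree.fromstring(), which returns the root element directly rather than an ElementTree:

from lxml import etree
from urllib.request import urlopen

parser = etree.HTMLParser()
with urlopen('https://pypi.python.org/simple') as f:
    # fromstring() parses a bytes/str payload and returns the root Element
    # (not an ElementTree), so query it with root.xpath(...) directly.
    root = etree.fromstring(f.read(), parser)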