How can I parse a local HTML file in Python using lxml?

rde*_*00b 11 python python-2.7

I'm working with a local HTML file in Python, and I'm trying to use lxml to parse it. For some reason I can't load the file correctly, and I'm not sure whether this is because I don't have an HTTP server set up on my local machine, because of how I'm using etree, or something else.

My reference for this code is: http://docs.python-guide.org/en/latest/scenarios/scrape/

This may be a related question: Requests: No connection adapters were found, error in Python3

Here is my code:

from lxml import html
import requests

page = requests.get('C:\Users\...\sites\site_1.html')
tree = html.fromstring(page.text)

test = tree.xpath('//html/body/form/div[3]/div[3]/div[2]/div[2]/div/div[2]/div[2]/p[1]/strong/text()')

print test

The traceback I get reads:

C:\Python27\python.exe "C:/Users/.../extract_html/extract.py"
Traceback (most recent call last):
  File "C:/Users/.../extract_html/extract.py", line 4, in <module>
    page = requests.get('C:\Users\...\sites\site_1.html')
  File "C:\Python27\lib\site-packages\requests\api.py", line 69, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Python27\lib\site-packages\requests\api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "C:\Python27\lib\site-packages\requests\sessions.py", line 465, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python27\lib\site-packages\requests\sessions.py", line 567, in send
    adapter = self.get_adapter(url=request.url)
  File "C:\Python27\lib\site-packages\requests\sessions.py", line 641, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'C:\Users\...\sites\site_1.html'

Process finished with exit code 1

You can see it has something to do with "connection adapters," but I'm not sure what that means.

Bry*_*ley 23

If the file is local, you shouldn't use requests at all; just open the file and read it in. requests expects to be talking to a web server.

from lxml import html

with open(r'C:\Users\...site_1.html', "r") as f:
    page = f.read()
tree = html.fromstring(page)
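Once the tree is built this way, the xpath query from the question works as intended. A minimal self-contained sketch (the inline HTML string and the short xpath below are illustrative stand-ins, not the asker's actual file or path):

```python
from lxml import html

# In practice `page` would come from reading the local file as shown above;
# an inline string stands in for its contents here.
page = "<html><body><p><strong>hello</strong></p></body></html>"

tree = html.fromstring(page)

# xpath() returns a list of matching text nodes
result = tree.xpath('//p/strong/text()')
print(result)  # ['hello']
```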


小智 10

There is a better way to do it: use the parse function instead of fromstring.

tree = html.parse(r"C:\Users\...site_1.html")
print(html.tostring(tree))
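Note that parse() reads the file itself and returns an ElementTree rather than an Element, but xpath queries work on it the same way. A sketch that writes a small throwaway file just so the example is self-contained (the file contents are illustrative):

```python
import os
import tempfile
from lxml import html

# Create a small throwaway HTML file to stand in for site_1.html
with tempfile.NamedTemporaryFile("w", suffix=".html", delete=False) as f:
    f.write("<html><body><p><strong>hello</strong></p></body></html>")
    path = f.name

tree = html.parse(path)  # returns an ElementTree, not an Element
result = tree.xpath('//p/strong/text()')
print(result)  # ['hello']

os.remove(path)  # clean up the throwaway file
```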

  • Don't forget to do the import first: `from lxml import html` (2 upvotes)

小智 5

You can also try using Beautiful Soup:

from bs4 import BeautifulSoup

# A with-statement closes the file automatically; naming a parser
# explicitly avoids BeautifulSoup's "no parser specified" warning.
with open("filepath", encoding="utf8") as f:
    soup = BeautifulSoup(f, "html.parser")
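BeautifulSoup has no xpath support, so the equivalent lookup uses its own find methods or CSS selectors. A minimal sketch (the inline document stands in for the local file's contents):

```python
from bs4 import BeautifulSoup

# An inline string stands in for the local file's contents
doc = "<html><body><p><strong>hello</strong></p></body></html>"
soup = BeautifulSoup(doc, "html.parser")

# CSS-selector lookup instead of the question's xpath expression
strong = soup.select_one("p strong")
print(strong.get_text())  # hello
```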