tor*_*ger asked (tags: python, lxml, screen-scraping, hyperlink, extraction):
I have this XPath query:
/html/body//tbody/tr[*]/td[*]/a[@title]/@href
It extracts all links that have a title attribute, and it returns the href values when I test it in the XPath Checker add-on for Firefox.
However, I can't seem to get it to work with lxml.
from lxml import etree
parsedPage = etree.HTML(page) # Create parse tree from valid page.
# Xpath query
hyperlinks = parsedPage.xpath("/html/body//tbody/tr[*]/td[*]/a[@title]/@href")
for x in hyperlinks:
    print x  # Print the links from <a> tags that carry a title attribute
This produces no results in lxml (an empty list).

How can I grab the href text (the links) of hyperlinks that have a title attribute, using lxml under Python?
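One common reason a query like this matches nothing is that the <tbody> element is often inserted by the browser's DOM view but may not exist in the raw page source that lxml parses. As a minimal sketch (not from the original post, and assuming page holds the fetched HTML string, as in the code above), a query anchored directly on the <a> elements sidesteps that mismatch:

from lxml import etree

parsedPage = etree.HTML(page)  # page is assumed to hold the raw HTML, as in the question

# Match every <a> that has a title attribute, wherever it sits in the tree,
# instead of depending on a <tbody> that the parser may not have created.
for href in parsedPage.xpath('//a[@title]/@href'):
    print href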
jkp*_*jkp answered:
I was able to get it working with the following code:
from lxml import html, etree
from StringIO import StringIO
html_string = '''<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html lang="en">
<head/>
<body>
<table border="1">
<tbody>
<tr>
<td><a href="http://stackoverflow.com/foobar" title="Foobar">A link</a></td>
</tr>
<tr>
<td><a href="http://stackoverflow.com/baz" title="Baz">Another link</a></td>
</tr>
</tbody>
</table>
</body>
</html>'''
tree = etree.parse(StringIO(html_string))
print tree.xpath('/html/body//tbody/tr/td/a[@title]/@href')
>>> ['http://stackoverflow.com/foobar', 'http://stackoverflow.com/baz']
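As a side note (not part of the original answer): the snippet imports lxml.html without using it. Here is a minimal sketch of the same query through lxml.html.fromstring, whose lenient HTML parser copes better with messy real-world pages, reusing the html_string sample defined above:

from lxml import html

# Parse with the forgiving HTML parser rather than the strict XML parser;
# html_string is the same sample document shown in the answer.
doc = html.fromstring(html_string)
print doc.xpath('//tbody/tr/td/a[@title]/@href')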