Mus*_*ger 2
Tags: html, python, string, parsing, beautifulsoup
I am trying to access article content from a website with BeautifulSoup, using the following code:
import urllib2
from bs4 import BeautifulSoup

site = 'http://www.example.com'
req = urllib2.Request(site)  # req was never defined in the original snippet
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)
content = soup.find_all('p')
content = str(content)
The content object holds all the main text of the page from inside the "p" tags, but other tags still remain in the output, as shown in the image below. I want to remove each matched pair of <> tags together with all the characters the tags themselves contain, so that only the text is left.
I tried the following, but it does not seem to work:
' '.join(item for item in content.split() if not (item.startswith('<') and item.endswith('>')))
What is the best way to remove substrings from a string that begin and end with a given pattern, such as < and >?
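As a side note, the split-based attempt fails because tags are usually glued to the adjacent words, so a token such as `<p>Some` starts with `<` but does not end with `>`. A minimal, self-contained reproduction with a made-up input string:

```python
# Made-up sample input; the tags are attached to the words around them.
content = '<p>Some text</p> more'
result = ' '.join(item for item in content.split()
                  if not (item.startswith('<') and item.endswith('>')))
print(result)  # the tags survive: <p>Some text</p> more
```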
Ani*_*non 12
Using a regex:
import re
re.sub('<[^<]+?>', '', text)
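For instance, a runnable sketch of this regex approach on a made-up HTML fragment:

```python
import re

# Made-up sample HTML fragment.
html = '<p>Hello <b>world</b></p>'
print(re.sub('<[^<]+?>', '', html))  # Hello world
```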
Using BeautifulSoup (solution from here):
import urllib
from bs4 import BeautifulSoup
url = "http://news.bbc.co.uk/2/hi/health/2284783.stm"
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)
# kill all script and style elements
for script in soup(["script", "style"]):
    script.extract()    # rip it out
# get text
text = soup.get_text()
# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)
print(text)
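The whitespace clean-up stage of the snippet above can be exercised on its own with a made-up string standing in for the `soup.get_text()` output (no network or bs4 needed):

```python
# Made-up text standing in for soup.get_text() output.
text = "  Headline one   Headline two  \n\n  Body text  \n"

lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
cleaned = '\n'.join(chunk for chunk in chunks if chunk)
print(cleaned)  # Headline one / Headline two / Body text, one per line
```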
Using NLTK (note: nltk.clean_html was removed in NLTK 3, which now directs users to BeautifulSoup's get_text instead):
import nltk
from urllib import urlopen
url = "https://stackoverflow.com/questions/tagged/python"
html = urlopen(url).read()
raw = nltk.clean_html(html)
print(raw)
You can use get_text():
for i in content:
    print i.get_text()
The following example is from the documentation:
>>> markup = '<a href="http://example.com/">\nI linked to <i>example.com</i>\n</a>'
>>> soup = BeautifulSoup(markup)
>>> soup.get_text()
u'\nI linked to example.com\n'
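get_text() also accepts a separator string and a strip flag, which tidy the surrounding whitespace in one call (a small sketch on the same markup as above):

```python
from bs4 import BeautifulSoup

markup = '<a href="http://example.com/">\nI linked to <i>example.com</i>\n</a>'
soup = BeautifulSoup(markup, 'html.parser')

# strip=True trims each text fragment and drops empty ones;
# the first argument joins the remaining fragments.
print(soup.get_text(' ', strip=True))  # I linked to example.com
```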
Views: 22149