Kri*_*673 0 html python beautifulsoup web-crawler web-scraping
I have some web page links in a file, article_links.txt, which I want to open one by one, extract the text from, and print out. My code is:
import requests
from inscriptis import get_text
from bs4 import BeautifulSoup

links = open(r'C:\Users\h473\Documents\Crawling\article_links.txt', "r")

for a in links:
    print(a)
    page = requests.get(a)
    soup = BeautifulSoup(page.text, 'lxml')
    html = soup.find(class_='article-wrap')
    if html==None:
        html = soup.find(class_='mag-article-wrap')
    text = get_text(html.text)
    print(text)
But I get an error saying:

---> text = get_text(html.text)
AttributeError: 'NoneType' object has no attribute 'text'

So, when I printed out the soup variable to see what its contents were, this is what I found for each link:
http://www3.asiainsurancereview.com//Mock-News-Article/id/42945/Type/eDaily/New-Zealand-Govt-starts-public-consultation-phase-of-review-of-insurance-law
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head><title>Bad Request</title>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/></head>
<body><h2>Bad Request - Invalid URL</h2>
<hr/><p>HTTP Error 400. The request URL is invalid.</p>
</body></html>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html><head><title>Bad Request</title>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/></head>
<body><h2>Bad Request - Invalid URL</h2>
<hr/><p>HTTP Error 400. The request URL is invalid.</p>
</body></html>
So, I tried extracting the text from a single link on its own, like this:
import requests
from inscriptis import get_text
from bs4 import BeautifulSoup
page = requests.get('http://www3.asiainsurancereview.com//Mock-News-Article/id/42945/Type/eDaily/New-Zealand-Govt-starts-public-consultation-phase-of-review-of-insurance-law')
soup = BeautifulSoup(page.text, 'lxml')
html = soup.find(class_='article-wrap')
if html==None:
    html = soup.find(class_='mag-article-wrap')
text = get_text(html.text)
print(text)
And it works perfectly! So, I tried supplying the links as a list/array and extracting the text from each one:
import requests
from inscriptis import get_text
from bs4 import BeautifulSoup
links = ['http://www3.asiainsurancereview.com//Mock-News-Article/id/42945/Type/eDaily/New-Zealand-Govt-starts-public-consultation-phase-of-review-of-insurance-law',
'http://www3.asiainsurancereview.com//Mock-News-Article/id/42946/Type/eDaily/India-M-A-deals-brewing-in-insurance-sector',
'http://www3.asiainsurancereview.com//Mock-News-Article/id/42947/Type/eDaily/China-Online-insurance-premiums-soar-31-in-1Q2018',
'http://www3.asiainsurancereview.com//Mock-News-Article/id/42948/Type/eDaily/South-Korea-Courts-increasingly-see-65-as-retirement-age',
'http://www3.asiainsurancereview.com//Magazine/ReadMagazineArticle/aid/40847/Creating-a-growth-environment-for-health-insurance-in-Asia']
#open(r'C:\Users\h473\Documents\Crawling\article_links.txt', "r")
for a in links:
    print(a)
    page = requests.get(a)
    soup = BeautifulSoup(page.text, 'lxml')
    html = soup.find(class_='article-wrap')
    if html==None:
        html = soup.find(class_='mag-article-wrap')
    text = get_text(html.text)
    print(text)
This works perfectly too! So, what goes wrong when the links are read from the text file, and how do I fix it?
The problem is that your URLs are invalid, because every one of them ends in a newline character. You can see this with something like:
>>> page = requests.get('http://www3.asiainsurancereview.com//Mock-News-Article/id/42945/Type/eDaily/New-Zealand-Govt-starts-public-consultation-phase-of-review-of-insurance-law\n')
>>> page
<Response [400]>
>>> page.text
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid URL</h2>
<hr><p>HTTP Error 400. The request URL is invalid.</p>
</BODY></HTML>
BeautifulSoup is parsing that HTML just fine. It just isn't very useful HTML. In particular, it doesn't contain anything with the class article-wrap or mag-article-wrap, so both of your find calls return None. And you don't have any error handling for that case; you just try to use the None value as if it were an HTML element, hence the exception.
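As a side note, a small sketch of how you could guard against that missing container instead of crashing (the HTML string here is a made-up stand-in for the error page, and html.parser is used only to avoid the lxml dependency):

```python
from bs4 import BeautifulSoup

# Hypothetical error page with neither article class present
html_doc = '<html><body><h2>Bad Request - Invalid URL</h2></body></html>'
soup = BeautifulSoup(html_doc, 'html.parser')

wrap = soup.find(class_='article-wrap') or soup.find(class_='mag-article-wrap')
if wrap is None:
    print('no article container found, skipping')  # instead of failing on wrap.text
```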
You should already have noticed this when you printed out each a: every line is followed by an extra blank line. That means either there's a newline character inside the string (which is what's actually happening), or there are blank lines between the actual lines in the file (which would be an even more invalid URL; you'd get a ConnectionError or some subclass for it).
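A quick way to confirm that the newline is inside the string itself is to print the repr() of each line rather than the line; the URLs below are made-up examples:

```python
# repr() makes invisible characters like '\n' explicit
raw_lines = ['http://example.com/a\n', 'http://example.com/b\n']
for line in raw_lines:
    print(repr(line))  # the trailing \n shows up literally in the output
```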
What you want to do is simple: just strip the newline off each line:
for a in links:
    a = a.rstrip()
    # rest of your code
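Putting it together, one way to clean every line as you read the file is a small helper that strips whitespace and drops blank lines entirely; clean_links is a hypothetical name, not part of the original code, and the URLs are made-up examples:

```python
def clean_links(lines):
    # strip() removes the trailing '\n' (and any stray spaces);
    # the 'if line.strip()' filter drops empty lines altogether
    return [line.strip() for line in lines if line.strip()]

raw = ['http://example.com/a\n', 'http://example.com/b\n', '\n']
print(clean_links(raw))  # → ['http://example.com/a', 'http://example.com/b']
```

The same helper works whether the lines come from a list or from an open file object, since both iterate line by line.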