Jaw*_*han 5 python beautifulsoup web-scraping
Hey, I'm trying to scrape https://www.dawn.com/pakistan, but the Python find() and find_all() methods return an empty list. I have already tried html.parser, html5lib and lxml with no luck. The class I'm trying to scrape is present both in the page source and in the soup object, but nothing seems to work. Any help would be appreciated, thanks!
Code:
from bs4 import BeautifulSoup
import lxml
import html5lib
import urllib.request
url1 = 'https://www.dawn.com/pakistan'
req = urllib.request.Request(
    url1,
    data=None,
    headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36'
    }
)
url1UrlContent = urllib.request.urlopen(req).read()
soup1 = BeautifulSoup(url1UrlContent, 'lxml')
url1Section1 = soup1.find_all('h2', class_='story__title size-five text-black font--playfair-display')
print(url1Section1)
I don't think you can pass a compound class name like that; what you have there are several separate classes. I use a CSS selector instead, as a faster way to retrieve the elements: the individual classes are chained together with ".".
If it's the headlines you're after, you can use a slightly different combination of selectors:
import requests
from bs4 import BeautifulSoup
url = 'https://www.dawn.com/pakistan'
res = requests.get(url)
soup = BeautifulSoup(res.content, "lxml")
items = [item.text.strip() for item in soup.select('h2[data-layout=story] a')]
print(items)
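To illustrate the point about compound class names on a snippet that runs without a network connection (the markup below is made up for the demo, not dawn.com's real HTML): chaining classes with "." in a CSS selector matches an element carrying all of them, while find_all(class_=...) matches a single class against each value of the multi-valued class attribute.

```python
from bs4 import BeautifulSoup

# Made-up markup for the demo; only the first <h2> carries all four classes.
html = """
<h2 class="story__title size-five text-black font--playfair-display">
  <a href="#">Headline one</a>
</h2>
<h2 class="story__title size-five">
  <a href="#">Headline two</a>
</h2>
"""
# html.parser is used here so the snippet runs even without lxml installed.
soup = BeautifulSoup(html, "html.parser")

# A CSS selector chains individual classes with '.', matching elements
# that carry all of them, regardless of their order in the attribute:
links = soup.select("h2.story__title.size-five.text-black.font--playfair-display a")
print([a.text for a in links])  # ['Headline one']

# find_all(class_=...) tests one class name against each class of the
# element, so a single class is enough to match both headings:
titles = soup.find_all("h2", class_="story__title")
print(len(titles))  # 2
```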
To limit it to the content on the left-hand side, you can use:
items = [item.text.strip() for item in soup.select('.story__title.size-five.text-black.font--playfair-display a')]
More broadly:
items = [item.text.strip() for item in soup.select('article [data-layout=story]')]
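The attribute selector used above can also be seen on a small, self-contained snippet (the markup is made up to mirror the data-layout attribute, not dawn.com's live HTML):

```python
from bs4 import BeautifulSoup

# Made-up markup: only the first child carries data-layout="story".
html = """
<article>
  <h2 data-layout="story"><a href="#">Story A</a></h2>
  <div data-layout="media">Not a story</div>
</article>
"""
soup = BeautifulSoup(html, "html.parser")

# '[data-layout=story]' is a CSS attribute selector: it matches any
# descendant of <article> whose data-layout attribute equals "story".
items = [item.text.strip() for item in soup.select('article [data-layout=story]')]
print(items)  # ['Story A']
```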
Based on your comment:
items = [item.text.strip() for item in soup.select('.col-sm-6.col-12')]