add*_*ons 32
Tags: html, python, tags, beautifulsoup, scraper
I want to extract the src of the image tag and the text of the anchor tag that sits inside the div marked with a class. I managed to extract the img src successfully, but I cannot get the text out of an anchor tag such as:
<a class="title" href="http://www.amazon.com/Nikon-COOLPIX-Digital-Camera-NIKKOR/dp/B0073HSK0K/ref=sr_1_1?s=electronics&ie=UTF8&qid=1343628292&sr=1-1&keywords=digital+camera">Nikon COOLPIX L26 16.1 MP Digital Camera with 5x Zoom NIKKOR Glass Lens and 3-inch LCD (Red)</a>
Here is a link to the full HTML page.
Here is my code:
for div in soup.findAll('div', attrs={'class':'image'}):
    print "\n"
    for data in div.findNextSibling('div', attrs={'class':'data'}):
        for a in data.findAll('a', attrs={'class':'title'}):
            print a.text
    for img in div.findAll('img'):
        print img['src']
What I want to do is extract the image src (the link) and the title inside the div with class="data". For example, this anchor:
<a class="title" href="http://www.amazon.com/Nikon-COOLPIX-Digital-Camera-NIKKOR/dp/B0073HSK0K/ref=sr_1_1?s=electronics&ie=UTF8&qid=1343628292&sr=1-1&keywords=digital+camera">Nikon COOLPIX L26 16.1 MP Digital Camera with 5x Zoom NIKKOR Glass Lens and 3-inch LCD (Red)</a>
should yield:
Nikon COOLPIX L26 16.1 MP Digital Camera with 5x Zoom NIKKOR Glass Lens and 3-inch LCD (Red)
gau*_*den 48
This should help:
from bs4 import BeautifulSoup

data = '''<div class="image">
<a href="http://www.example.com/eg1">Content1<img
src="http://image.example.com/img1.jpg" /></a>
</div>
<div class="image">
<a href="http://www.example.com/eg2">Content2<img
src="http://image.example.com/img2.jpg" /> </a>
</div>'''

# pass an explicit parser to avoid the "no parser specified" warning
soup = BeautifulSoup(data, 'html.parser')
for div in soup.findAll('div', attrs={'class': 'image'}):
    print(div.find('a')['href'])
    print(div.find('a').contents[0])
    print(div.find('img')['src'])
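For the exact layout in the question, where the title anchor sits in a separate div with class "data" next to the image div, a minimal sketch (the markup below is made up to mirror that structure) can step across with find_next_sibling:

from bs4 import BeautifulSoup

# hypothetical markup mirroring the structure described in the question
data = '''<div class="image"><img src="http://image.example.com/cam.jpg" /></div>
<div class="data"><a class="title" href="http://www.example.com/item">Example Camera</a></div>'''

soup = BeautifulSoup(data, 'html.parser')
for div in soup.find_all('div', attrs={'class': 'image'}):
    print(div.find('img')['src'])
    # the title lives in the next sibling div, not inside the image div
    data_div = div.find_next_sibling('div', attrs={'class': 'data'})
    if data_div is not None:
        print(data_div.find('a', attrs={'class': 'title'}).text)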
If you are working with Amazon products, you should really use the official API. There is at least one Python package that will ease your scraping woes and keep your activity within the terms of use.
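As a rough illustration only (the answer does not name a specific package), the bottlenose wrapper for Amazon's Product Advertising API works along these lines, assuming you have API credentials:

import bottlenose

# all three credentials are placeholders you would obtain from Amazon
amazon = bottlenose.Amazon('AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_ASSOCIATE_TAG')

# ItemSearch returns the raw XML response; parse it with lxml or BeautifulSoup
response = amazon.ItemSearch(Keywords="digital camera", SearchIndex="Electronics")
print(response)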
Pon*_*ios 19
In my case, it works like this:
import urllib
from BeautifulSoup import BeautifulSoup as bs

url = "http://blabla.com"
soup = bs(urllib.urlopen(url))
for link in soup.findAll('a'):
    print link.string
Hope that helps!
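The same idea with bs4 and requests (assuming you prefer those over the older BeautifulSoup 3 and urllib used above) looks like this:

import requests
from bs4 import BeautifulSoup

url = "http://blabla.com"  # placeholder URL, as in the snippet above
soup = BeautifulSoup(requests.get(url).text, 'html.parser')
for link in soup.find_all('a'):
    print(link.string)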
All the answers above helped me build my own, so I upvoted everything the other users posted, but in the end I put together my own answer for the exact problem I was dealing with.
As the question spells out, I need to reach some siblings and their children in the DOM structure. This solution iterates over the images in the DOM, builds a file name for each image from the product title, and saves the image to a local directory.
import os
from urllib import urlretrieve
from BeautifulSoup import BeautifulSoup as bs
import requests

def getImages(url):
    # download the page and parse it
    r = requests.get(url)
    html = r.text
    soup = bs(html)
    # expanduser so '~' resolves to the home directory
    output_folder = os.path.expanduser('~/amazon')
    if not os.path.isdir(output_folder):
        os.makedirs(output_folder)
    # extract the images that sit inside the div(s)
    for div in soup.findAll('div', attrs={'class': 'image'}):
        modified_file_name = None
        try:
            # get the data div using findNext
            nextDiv = div.findNext('div', attrs={'class': 'data'})
            # use findNext again on that object to reach the anchor tag
            fileName = nextDiv.findNext('a').text
            modified_file_name = fileName.replace(' ', '-') + '.jpg'
        except TypeError:
            print 'skip'
            continue
        imageUrl = div.find('img')['src']
        outputPath = os.path.join(output_folder, modified_file_name)
        urlretrieve(imageUrl, outputPath)

if __name__ == '__main__':
    url = r'http://www.amazon.com/s/ref=sr_pg_1?rh=n%3A172282%2Ck%3Adigital+camera&keywords=digital+camera&ie=UTF8&qid=1343600585'
    getImages(url)
I would suggest going the lxml route and using XPath.
from lxml import etree
# data is the variable containing the html
data = etree.HTML(data)
anchor = data.xpath('//a[@class="title"]/text()')
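If you also want the image URLs and not just the titles, the same parsed tree can be queried for both and the results paired up, assuming the div structure from the question:

# continuing from the snippet above, where `data` now holds the parsed tree
titles = data.xpath('//div[@class="data"]/a[@class="title"]/text()')
images = data.xpath('//div[@class="image"]//img/@src')
for title, src in zip(titles, images):
    print(title, src)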