urllib2 request web-scraping python-3.x
I'm trying to scrape links from this XML page by keyword, but urllib2 is giving me errors I can't resolve in Python 3...
from bs4 import BeautifulSoup
import requests
import smtplib
import urllib2
from lxml import etree
url = 'https://store.fabspy.com/sitemap_products_1.xml?from=5619742598&to=9172987078'
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
       'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
       'Accept-Encoding': 'none',
       'Accept-Language': 'en-US,en;q=0.8',
       'Connection': 'keep-alive'}
proxies = {'https': '209.212.253.44'}

req = urllib2.Request(url, headers=hdr, proxies=proxies)
try:
    page = urllib2.urlopen(req)
except urllib2.HTTPError as e:
    print(e.fp.read())

content = page.read()
def parse(self, response):
    try:
        print(response.status)
        print('???????????????????????????????????')
        if response.status == 200:
            self.driver.implicitly_wait(5)
            self.driver.get(response.url)
            print(response.url)
            print('!!!!!!!!!!!!!!!!!!!!')
            # DO STUFF
    except httplib.BadStatusLine:
        pass

while True:
    soup = BeautifulSoup(a.context, 'lxml')
    links = soup.find_all('loc')
    for link in links:
        if 'notonesite' and 'winter' in link.text:
            print(link.text)
            jake = link.text
I'm just trying to send a urllib request through a proxy to check whether a link is in the sitemap...
urllib2 is not available in Python 3. You should use urllib.request and urllib.error instead:
import urllib.request
import urllib.error
...
req = urllib.request.Request(url, headers=hdr)  # doesn't take a proxies argument though...
...
try:
    page = urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    ...
...and so on. Note, however, that urllib.request.Request() does not take a proxies argument. See the documentation for proxy handling.
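The proxy handling the docs describe can be sketched with urllib.request.ProxyHandler. This is a minimal sketch, not a tested setup; the proxy address is reused from the question and is assumed to be reachable:

```python
import urllib.request

# Proxy mapping reused from the question; substitute a proxy you control.
proxies = {'https': 'https://209.212.253.44'}

# ProxyHandler routes matching requests through the given proxies.
proxy_handler = urllib.request.ProxyHandler(proxies)
opener = urllib.request.build_opener(proxy_handler)

# Either open URLs with the opener directly:
#     page = opener.open(req)
# ...or install it globally so plain urllib.request.urlopen() uses the proxy:
urllib.request.install_opener(opener)
```

Installing the opener globally keeps the rest of the code unchanged, since every subsequent urlopen() call goes through the proxy.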
Viewed: 13062 times