I'm trying to access the web through a proxy server in Python. I'm using the requests library, and I'm having trouble authenticating against my proxy, because the proxy I'm using requires a password.
proxyDict = {
    'http': 'username:mypassword@77.75.105.165',
    'https': 'username:mypassword@77.75.105.165'
}
r = requests.get("http://www.google.com", proxies=proxyDict)
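For what it's worth, one thing to check is that each proxy URL carries an explicit scheme; a minimal sketch of the dictionary shape requests expects, reusing the credentials and host from above (the actual request is left commented out since it needs a live proxy):

```python
# Requests accepts proxy credentials embedded in the URL, but each proxy URL
# should start with an explicit scheme such as "http://".
proxyDict = {
    'http':  'http://username:mypassword@77.75.105.165',
    'https': 'http://username:mypassword@77.75.105.165',
}
# r = requests.get("http://www.google.com", proxies=proxyDict)
```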
I get the following error:
Traceback (most recent call last):
  File "<pyshell#13>", line 1, in <module>
    r = requests.get("http://www.google.com", proxies=proxyDict)
  File "C:\Python27\lib\site-packages\requests\api.py", line 78, in get
    :param url: URL for the new :class:`Request` object.
  File "C:\Python27\lib\site-packages\requests\api.py", line 65, in request
    """Sends a POST request. Returns :class:`Response` object.
  File "C:\Python27\lib\site-packages\requests\sessions.py", line 187, in request
    def head(self, url, **kwargs):
  File "C:\Python27\lib\site-packages\requests\models.py", line 407, in send
    """
  File "C:\Python27\lib\site-packages\requests\packages\urllib3\poolmanager.py", line 127, in …
When I run a loop over a bunch of URLs to find all the links (inside certain divs) on those pages, I get this error:
Traceback (most recent call last):
  File "file_location", line 38, in <module>
    out.writerow(tag['href'])
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2026' in position 0: ordinal not in range(128)
The code I've written that relates to this error is:
out = csv.writer(open("file_location", "ab"), delimiter=";")
for tag in soup_3.findAll('a', href=True):
    out.writerow(tag['href'])
Is there a way around this, perhaps using an if statement to skip any URL that raises a Unicode error?
Thanks in advance for your help.
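A minimal sketch of the skip-on-error idea: under Python 2 the csv module implicitly encodes unicode values to ASCII, which is what raises the error, so catching UnicodeEncodeError lets you drop the offending URLs. The hrefs list and the io.StringIO buffer below are illustrative stand-ins for soup_3's results and the output file:

```python
import csv
import io

# Hypothetical hrefs standing in for soup_3's results;
# the second one contains u'\u2026' (the "…" character from the traceback).
hrefs = [u'http://example.com/ok', u'http://example.com/\u2026']

buf = io.StringIO()                  # stand-in for open("file_location", "ab")
out = csv.writer(buf, delimiter=';')
for href in hrefs:
    try:
        href.encode('ascii')         # reproduces the implicit check that fails under Python 2
    except UnicodeEncodeError:
        continue                     # skip URLs that would crash the writer
    out.writerow([href])             # writerow expects a sequence, not a bare string
```

Rather than skipping, explicitly encoding each value (e.g. `out.writerow([tag['href'].encode('utf-8')])` under Python 2) keeps the problematic URLs instead of dropping them.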
I'm trying to create a CSV file containing a list of URLs.
I'm very new to programming, so please excuse any sloppy code.
I have a loop that runs through a list of locations to get a list of URLs.
Then, inside that loop, I have another loop that exports the data to a CSV file.
import urllib, csv, re
from BeautifulSoup import BeautifulSoup

list_of_URLs = csv.reader(open("file_location_for_URLs_to_parse"))
for row in list_of_URLs:
    row_string = "".join(row)
    file = urllib.urlopen(row_string)
    page_HTML = file.read()
    soup = BeautifulSoup(page_HTML)  # parsing HTML
    Thumbnail_image = soup.findAll("div", {"class": "remositorythumbnail"})
    Thumbnail_image_string = str(Thumbnail_image)
    soup_3 = BeautifulSoup(Thumbnail_image_string)
    Thumbnail_image_URL = soup_3.findAll('a', attrs={'href': re.compile("^http://")})
This is the part that isn't working for me:
out = csv.writer(open("file_location", "wb"), delimiter=";")
for tag in soup_3.findAll('a', href=True):
    out.writerow(tag['href'])
Basically the writer keeps overwriting its own output. Is there a way to skip below the first empty row of the CSV and start writing from there?
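For what it's worth, a common cause of this symptom is opening the file with "wb" inside the outer loop, which truncates it on every pass. A minimal sketch of creating the writer once and reusing it across iterations, with a hypothetical hrefs list standing in for soup_3's results and io.StringIO standing in for the output file:

```python
import csv
import io

# Hypothetical stand-ins for soup_3's results across loop iterations.
hrefs = ['http://example.com/1', 'http://example.com/2']

buf = io.StringIO()                   # in a script: open the file once, before the loop
out = csv.writer(buf, delimiter=';')  # create the writer once, then reuse it
for href in hrefs:
    out.writerow([href])              # writerow expects a sequence, not a bare string
```

Alternatively, opening the file in append mode ("ab" under Python 2) instead of "wb" adds new rows after the existing ones rather than truncating the file each time.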