Dee*_*ngh asked (score 6) · tags: python, beautifulsoup, web-scraping, python-requests
I wrote this code to scrape this particular page, but it keeps raising the error:
"requests.exceptions.SSLError: HTTPSConnectionPool(host='rcms.assam.gov.in', port=443): Max retries exceeded with url: /Show_Reports.aspx?RID=86 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))"
import requests
from bs4 import BeautifulSoup as bs

url = "https://rcms.assam.gov.in/Show_Reports.aspx?RID=86"
page = requests.get(url)   # the SSLError is raised here
soup = bs(page.text, "lxml")
Ctr*_*rlZ answered (score 10)
You can do this, at your own risk:
page = requests.get(url, verify=False)
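A fuller sketch of the same workaround: `verify=False` skips certificate verification entirely, so every request also emits an `InsecureRequestWarning`. Setting `verify` once on a `requests.Session` and silencing the warning makes the intent explicit. The actual `get` call is shown commented out since it hits the live site; the URL is the one from the question.

```python
import requests
import urllib3

# Disabling verification makes urllib3 emit an InsecureRequestWarning on
# every request; silence it explicitly so the choice is deliberate.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

session = requests.Session()
session.verify = False  # applies to every request made through this session

# page = session.get("https://rcms.assam.gov.in/Show_Reports.aspx?RID=86")
# soup = bs(page.text, "lxml")
```

Note that this disables protection against man-in-the-middle attacks; a safer fix is to point `verify=` at a CA bundle that includes the site's missing intermediate certificate.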
Viewed: 43641 times