Tags: python, selenium, beautifulsoup, web-scraping, selenium-webdriver
So I want to loop through an array of URLs and open each one for web scraping with Selenium. The problem is that as soon as I hit the second browser.get(url), I get "Max retries exceeded with url" and "No connection could be made because the target machine actively refused it".

Edit: added the rest of the code, though it's just BeautifulSoup stuff.
from bs4 import BeautifulSoup
import time
from selenium import webdriver
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options
import json

chrome_options = Options()
chromedriver = webdriver.Chrome(executable_path='C:/Users/andre/Downloads/chromedriver_win32/chromedriver.exe', options=chrome_options)

urlArr = ['https://link1', 'https://link2', '...']

for url in urlArr:
    with chromedriver as browser:
        browser.get(url)
        time.sleep(5)
        # Click a button
        chromedriver.find_elements_by_tag_name('a')[7].click()
        chromedriver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(2)
        for i in range(0, 2):
            chromedriver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(5)
        html = browser.page_source
        page_soup = BeautifulSoup(html, 'html.parser')
        boxes = page_soup.find("div", {"class": "rpBJOHq2PR60pnwJlUyP0"})
        videos = page_soup.findAll("video", {"class": "_1EQJpXY7ExS04odI1YBBlj"})
Other posts on here say this happens when you open too many pages at once and the server locks you out, but that's not my problem. The error above happens every time I call browser.get(url) more than once.

What's going on here? Thanks.
Solved the problem. You have to re-create the webdriver on every iteration: the with block quits the driver when it exits, so on the second iteration browser.get(url) is talking to a chromedriver that is no longer running, hence the "connection refused" error.
from bs4 import BeautifulSoup
import time
from selenium import webdriver
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options
import json

urlArr = ['https://link1', 'https://link2', '...']

for url in urlArr:
    # Re-create the driver for every URL, since the with block quits it on exit
    chrome_options = Options()
    chromedriver = webdriver.Chrome(executable_path='C:/Users/andre/Downloads/chromedriver_win32/chromedriver.exe', options=chrome_options)
    with chromedriver as browser:
        browser.get(url)
        time.sleep(5)
        # Click a button
        chromedriver.find_elements_by_tag_name('a')[7].click()
        chromedriver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(2)
        for i in range(0, 2):
            chromedriver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(5)
        html = browser.page_source
        page_soup = BeautifulSoup(html, 'html.parser')
        boxes = page_soup.find("div", {"class": "rpBJOHq2PR60pnwJlUyP0"})
        videos = page_soup.findAll("video", {"class": "_1EQJpXY7ExS04odI1YBBlj"})
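Since the with block is what shuts the driver down, an alternative (assuming the pages don't need a fresh browser session each time) is to skip it entirely: create one driver, reuse it for every URL, and quit once at the end. A minimal sketch, using the same old-style Selenium calls and the placeholder URLs and class names from the code above:

from bs4 import BeautifulSoup
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
# One driver for the whole run (same old-style Selenium API as above)
browser = webdriver.Chrome(executable_path='C:/Users/andre/Downloads/chromedriver_win32/chromedriver.exe', options=chrome_options)

urlArr = ['https://link1', 'https://link2', '...']

try:
    for url in urlArr:
        browser.get(url)  # no with block, so the driver stays alive between URLs
        time.sleep(5)
        # Click a button
        browser.find_elements_by_tag_name('a')[7].click()
        # Scroll to the bottom a few times so lazy-loaded content appears
        for i in range(3):
            browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(5)
        page_soup = BeautifulSoup(browser.page_source, 'html.parser')
        boxes = page_soup.find("div", {"class": "rpBJOHq2PR60pnwJlUyP0"})
        videos = page_soup.findAll("video", {"class": "_1EQJpXY7ExS04odI1YBBlj"})
finally:
    browser.quit()  # shut chromedriver down once, after all URLs are processed

This avoids the cost of launching a new Chrome process per URL; re-creating the driver inside the loop, as in the accepted fix, is only needed if each page should start from a clean session.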