Using Python to parse data scraped from a JavaScript-rendered web page

Jus*_*fit 0 python web-scraping

I am trying to use .find off of a soup variable, but when I request the web page and try to find the right class, it returns nothing.

from bs4 import BeautifulSoup
import time
import pandas as pd
import pickle
import html5lib
from requests_html import HTMLSession

s = HTMLSession()
url = "https://cryptoli.st/lists/fixed-supply"


def get_data(url):
    r = s.get(url)
    global soup
    soup = BeautifulSoup(r.text, 'html.parser')
    return soup

def get_next_page(soup):
    page = soup.find('div', {'class': 'dataTables_paginate paging_simple_numbers'})
    return page
    
get_data(url)
print(get_next_page(soup))

The "page" variable returns None, even though I copied the selector straight from the site's element inspector. I suspect this has to do with the fact that the site is rendered with JavaScript, but I don't know why. If I take away {'class': 'dataTables_paginate paging_simple_numbers'} and just search for 'div', it works and returns the first div tag, so I don't know what else to do.
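For context on why this happens: BeautifulSoup only parses the raw HTML the server sends back; it never executes any JavaScript. A minimal sketch (with a made-up page, not the actual cryptoli.st markup) showing that an element created by a script is invisible to soup.find:

```python
from bs4 import BeautifulSoup

# A page whose pagination div is only created by JavaScript at runtime
raw_html = """
<html><body>
  <div id="table-wrapper"></div>
  <script>
    // runs in a browser, never in BeautifulSoup
    document.getElementById('table-wrapper').innerHTML =
      '<div class="dataTables_paginate paging_simple_numbers"></div>';
  </script>
</body></html>
"""

soup = BeautifulSoup(raw_html, "html.parser")

# The div exists only as text inside the <script> tag, not as an element,
# so the same lookup that fails on the live site fails here too
print(soup.find("div", {"class": "dataTables_paginate paging_simple_numbers"}))  # None
```

This is exactly the situation with requests: r.text is the pre-JavaScript HTML, so the DataTables pagination controls are simply not in it yet.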

小智 5

So you want to scrape dynamic page content; you can use Beautiful Soup together with Selenium WebDriver. This answer is based on the explanation here: https://www.geeksforgeeks.org/scrape-content-from-dynamic-websites/

from bs4 import BeautifulSoup
from selenium import webdriver
import time

url = "https://cryptoli.st/lists/fixed-supply"

# Point this at a local ChromeDriver binary (Selenium 3 style;
# with Selenium 4 pass a Service object instead)
driver = webdriver.Chrome('./chromedriver')
driver.get(url)

# crude wait to ensure the page's JavaScript has finished rendering;
# a WebDriverWait with an expected condition is more robust
time.sleep(5)

# page_source now contains the rendered DOM: the JS-generated
# content has been baked into static HTML
html = driver.page_source

# Now we can simply apply bs4 to the html variable
soup = BeautifulSoup(html, "html.parser")
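Once the soup is built from driver.page_source, the original get_next_page lookup should succeed. A minimal sketch, using hypothetical markup mimicking what a rendered DataTables pagination widget might look like (not the actual cryptoli.st source):

```python
from bs4 import BeautifulSoup

# Hypothetical rendered output of a DataTables pagination control
rendered_html = """
<div class="dataTables_paginate paging_simple_numbers" id="datatable_paginate">
  <a class="paginate_button" href="#">1</a>
  <a class="paginate_button" href="#">2</a>
</div>
"""

soup = BeautifulSoup(rendered_html, "html.parser")

# The same find() call from the question now matches, because the
# div exists as a real element in the parsed tree
page = soup.find("div", {"class": "dataTables_paginate paging_simple_numbers"})
print(page is not None)  # True
```

Note that passing a space-separated string like "dataTables_paginate paging_simple_numbers" matches the class attribute exactly as written, so it only works when the rendered attribute has those classes in that order; matching on a single class such as "dataTables_paginate" is more forgiving.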