Tried Python BeautifulSoup and PhantomJS: STILL can't scrape the website

JJT*_*ler 3 javascript python beautifulsoup web-scraping phantomjs

Over the past few weeks you may have seen my desperate frustration. I've been scraping some wait-time data, but I still can't get the data from these two sites:

http://www.centura.org/erwait

http://hcavirginia.com/home/

At first I tried BS4 for Python. Sample code for HCA Virginia is below:

from bs4 import BeautifulSoup
import requests

url = 'http://hcavirginia.com/home/'
r = requests.get(url)

soup = BeautifulSoup(r.text)
wait_times = [span.text for span in soup.find_all('span', attrs={'class': 'ehc-er-digits'})]

fd = open('HCA_Virginia.csv', 'a')

for w in wait_times:
    fd.write(w + '\n')

fd.close()

All of this just prints blanks to the console or the CSV. So I tried PhantomJS, since someone told me the content might be loaded by JS. Same result, though! Blanks to the console or the CSV. Sample code below.

var page = require('webpage').create(),
    url = 'http://hcavirginia.com/home/';

page.open(url, function(status) {
    if (status !== "success") {
        console.log("Can't access network");
    } else {
        // Collect the text of every wait-time span as it exists right
        // after the load event.
        var result = page.evaluate(function() {
            var list = document.querySelectorAll('span.ehc-er-digits'),
                time = [],
                i;
            for (i = 0; i < list.length; i++) {
                time.push(list[i].innerText);
            }
            return time;
        });
        console.log(result.join('\n'));

        // Append the values to the CSV.
        var fs = require('fs');
        try {
            fs.write("HCA_Virginia.csv", '\n' + result.join('\n'), 'a');
        } catch (e) {
            console.log(e);
        }
    }

    phantom.exit();
});

Same problem with Centura Health :(

What exactly am I doing wrong?

Ste*_*ima 12

The problem you're running into is that the elements are created by JS, and it can take a moment for them to load. You need a scraper that handles JS and can wait until the required elements have been created.
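
One quick way to see this for yourself is to look at what a plain HTTP fetch actually returns (a minimal sketch that just reuses the requests/BeautifulSoup setup from the question):

import requests
from bs4 import BeautifulSoup

r = requests.get('http://hcavirginia.com/home/')
soup = BeautifulSoup(r.text)

# If this prints an empty list (or spans with no digits in them), the wait
# times are filled in by JavaScript after the page loads, so a plain HTTP
# fetch will never contain them, no matter how the HTML is parsed.
print soup.find_all('span', attrs={'class': 'ehc-er-digits'})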

You could use PyQt4. Adapting this recipe from webscraping.com together with an HTML parser like BeautifulSoup, it's pretty simple:

(After writing this, I found the webscraping library for Python. It might be worth a look.)

import sys
from bs4 import BeautifulSoup
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from PyQt4.QtWebKit import * 

class Render(QWebPage):
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._loadFinished)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()

    def _loadFinished(self, result):
        self.frame = self.mainFrame()
        self.app.quit()   

url = 'http://hcavirginia.com/home/'
r = Render(url)
soup = BeautifulSoup(unicode(r.frame.toHtml()))
# In Python 3.x, don't unicode the output from .toHtml(): 
#soup = BeautifulSoup(r.frame.toHtml()) 
nums = [int(span.text) for span in soup.find_all('span', class_='ehc-er-digits')]
print nums

Output:

[21, 23, 47, 11, 10, 8, 68, 56, 19, 15, 7]
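
If the goal is still the CSV from the question, the rendered values can be appended with the same file handling (a sketch only; HCA_Virginia.csv and nums are the names used above, and nothing here is specific to PyQt4):

# Append the rendered wait times to the same CSV the original script wrote to.
fd = open('HCA_Virginia.csv', 'a')
for n in nums:
    fd.write('%d\n' % n)
fd.close()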

Here is my original answer, using ghost.py:

I managed to hack something together for you using ghost.py (tested on Python 2.7 with ghost.py 0.1b3 and 32-bit PyQt4). I wouldn't recommend using it in production code!

from ghost import Ghost
from time import sleep

ghost = Ghost(wait_timeout=50, download_images=False)
page, extra_resources = ghost.open('http://hcavirginia.com/home/',
                                   headers={'User-Agent': 'Mozilla/4.0'})

# Halt execution of the script until a span.ehc-er-digits is found in 
# the document
page, resources = ghost.wait_for_selector("span.ehc-er-digits")

# It should be possible to simply evaluate
# "document.getElementsByClassName('ehc-er-digits');" and extract the data from
# the returned dictionary, but I didn't quite understand the
# data structure - hence this inline javascript.
nums, resources = ghost.evaluate(
    """
    elems = document.getElementsByClassName('ehc-er-digits');
    nums = []
    for (i = 0; i < elems.length; ++i) {
        nums[i] = elems[i].innerHTML;
    }
    nums;
    """)

wt_data = [int(x) for x in nums]
print wt_data
sleep(30) # Sleep a while to avoid the crashing of the script. Weird issue!

A few comments:

  • As you can see from my comments, I didn't quite figure out the structure of the dict returned by Ghost.evaluate("document.getElementsByClassName('ehc-er-digits');") - although it is probably possible to find the required information with a query like that.

  • I also ran into a problem where the script crashed at the end. Sleeping for 30 seconds worked around that issue.