Passing the Selenium response URL to Scrapy

Ony*_*Lam 6 python selenium scrapy

I'm learning Python and trying to scrape this page for specific values selected from a dropdown menu. After that, I need to click each item on the results table to retrieve specific information. I am able to select an item and retrieve the information on the webdriver, but I don't know how to pass the response URL to the crawlspider.

from scrapy.http import TextResponse
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
import time

driver = webdriver.Firefox()
driver.get('http://www.cppcc.gov.cn/CMS/icms/project1/cppcc/wylibary/wjWeiYuanList.jsp')
more_btn = WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.ID, '_button_select'))
)
more_btn.click()

## select specific values from the dropdowns
driver.find_element_by_css_selector("select#tabJcwyxt_jiebie > option[value='teyaoxgrs']").click()
driver.find_element_by_css_selector("select#tabJcwyxt_jieci > option[value='d11jie']").click()
search2 = driver.find_element_by_class_name('input_a2')
search2.click()
time.sleep(5)

## convert html to "nice format"
text_html = driver.page_source.encode('utf-8')
html_str = str(text_html)

## this is a hack that initiates a "TextResponse" object (taken from the Scrapy module)
resp_for_scrapy = TextResponse('none', 200, {}, html_str, [], None)


So this is where I'm stuck. I was able to run the query using the code above. But how do I pass resp_for_scrapy to the crawlspider? I put resp_for_scrapy in place of item, but that didn't work.

## spider
class ProfileSpider(CrawlSpider):
    name = 'pccprofile2'
    allowed_domains = ['cppcc.gov.cn']
    start_urls = ['http://www.cppcc.gov.cn/CMS/icms/project1/cppcc/wylibary/wjWeiYuanList.jsp']

    def parse(self, resp_for_scrapy):
        hxs = HtmlXPathSelector(resp_for_scrapy)
        for post in resp_for_scrapy.xpath('//div[@class="table"]//ul//li'):
            items = []
            item = Ppcprofile2Item()
            item["name"] = hxs.select("//h1/text()").extract()
            item["title"] = hxs.select("//div[@id='contentbody']//tr//td//text()").extract()
            items.append(item)

        ## click next page
        while True:
            next = self.driver.findElement(By.linkText("???"))
            try:
                next.click()
            except:
                break

        return items

Any suggestions would be greatly appreciated!!!!

EDIT: I included a middleware class to make the selection from the dropdown before the spider class. But now there are no errors and no results either.

class JSMiddleware(object):
    def process_request(self, request, spider):
        driver = webdriver.PhantomJS()
        driver.get('http://www.cppcc.gov.cn/CMS/icms/project1/cppcc/wylibary/wjWeiYuanList.jsp')

        # select from the dropdown
        more_btn = WebDriverWait(driver, 20).until(
            EC.visibility_of_element_located((By.ID, '_button_select'))
        )
        more_btn.click()

        driver.find_element_by_css_selector("select#tabJcwyxt_jiebie > option[value='teyaoxgrs']").click()
        driver.find_element_by_css_selector("select#tabJcwyxt_jieci > option[value='d11jie']").click()
        search2 = driver.find_element_by_class_name('input_a2')
        search2.click()
        time.sleep(5)

        # get the response
        body = driver.page_source
        return HtmlResponse(driver.current_url, body=body, encoding='utf-8', request=request)



class ProfileSpider(CrawlSpider):
    name = 'pccprofile2'
    rules = [Rule(SgmlLinkExtractor(allow=(), restrict_xpaths=("//div[@class='table']")), callback='parse_item')]

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        items = []
        item = Ppcprofile2Item()
        item["name"] = hxs.select("//h1/text()").extract()
        item["title"] = hxs.select("//div[@id='contentbody']//tr//td//text()").extract()
        items.append(item)

        # click next page
        while True:
            next = response.findElement(By.linkText("???"))
            try:
                next.click()
            except:
                break

        return items

Joe*_*nux 20

Use a Downloader Middleware to catch the pages that require Selenium before you process them regularly with Scrapy:

The downloader middleware is a framework of hooks into Scrapy's request/response processing. It's a light, low-level system for globally altering Scrapy's requests and responses.

Here's a very basic example using PhantomJS:

from scrapy.http import HtmlResponse
from selenium import webdriver

class JSMiddleware(object):
    def process_request(self, request, spider):
        driver = webdriver.PhantomJS()
        driver.get(request.url)

        body = driver.page_source
        return HtmlResponse(driver.current_url, body=body, encoding='utf-8', request=request)

Once you return that HtmlResponse (or a TextResponse, if that's what you really want), Scrapy will stop processing downloaders and drop into the spider's parse method:

If it returns a Response object, Scrapy won't bother calling any other process_request() or process_exception() methods, or the appropriate download function; it'll return that response. The process_response() methods of installed middleware are always called on every response.

In that case, you can continue to use your spider's parse method as you normally would with HTML, except that the JS on the page has already been executed.
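For example, a minimal parse method over the rendered page might look like this (the XPath expressions mirror the ones from the question and are placeholders you would adapt):

def parse(self, response):
    # the JS has already run, so the rows produced by the dropdown query are in response.body
    for post in response.xpath('//div[@class="table"]//ul//li'):
        item = Ppcprofile2Item()
        item["name"] = post.xpath('.//h1/text()').extract()
        item["title"] = post.xpath('.//td//text()').extract()
        yield item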

Tip: Since the Downloader Middleware's process_request method accepts the spider as a parameter, you can add a conditional in the spider to check whether you need to process JS at all, and that will let you handle both JS and non-JS pages with the exact same spider class.
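A sketch of that conditional, assuming a made-up needs_js spider attribute (not a Scrapy or Selenium API):

class JSMiddleware(object):
    def process_request(self, request, spider):
        # returning None lets Scrapy download the page normally, skipping Selenium
        if not getattr(spider, 'needs_js', False):
            return None
        driver = webdriver.PhantomJS()
        driver.get(request.url)
        body = driver.page_source
        return HtmlResponse(driver.current_url, body=body, encoding='utf-8', request=request)

Any spider that sets needs_js = True gets the Selenium-rendered response; all other spiders are downloaded as usual.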


Lev*_*von 7

Here's a middleware for Scrapy and Selenium:

from scrapy.http import HtmlResponse
from scrapy.utils.python import to_bytes
from selenium import webdriver
from scrapy import signals


class SeleniumMiddleware(object):

    @classmethod
    def from_crawler(cls, crawler):
        middleware = cls()
        crawler.signals.connect(middleware.spider_opened, signals.spider_opened)
        crawler.signals.connect(middleware.spider_closed, signals.spider_closed)
        return middleware

    def process_request(self, request, spider):
        request.meta['driver'] = self.driver  # to access driver from response
        self.driver.get(request.url)
        body = to_bytes(self.driver.page_source)  # body must be of type bytes 
        return HtmlResponse(self.driver.current_url, body=body, encoding='utf-8', request=request)

    def spider_opened(self, spider):
        self.driver = webdriver.Firefox()

    def spider_closed(self, spider):
        self.driver.close()
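Since process_request stores the driver in request.meta, the spider can reach the live browser through response.meta (Scrapy carries the request's meta over to the response). A small sketch, with placeholder XPaths:

def parse(self, response):
    # the Selenium driver that rendered this page, set by the middleware above
    driver = response.meta['driver']
    for row in response.xpath('//div[@class="table"]//ul//li'):
        yield {'title': row.xpath('.//text()').extract()}
    # driver is still on the page here, so you could click pagination links with it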

You also need to add this to settings.py:

DOWNLOADER_MIDDLEWARES = {
    'youproject.middlewares.selenium.SeleniumMiddleware': 200
}

Decide whether to use 200 or something else based on the docs.

UPDATE: Firefox headless mode with Scrapy and Selenium

If you want to run Firefox in headless mode, then install xvfb:

sudo apt-get install -y xvfb

and PyVirtualDisplay:

sudo pip install pyvirtualdisplay

and use this middleware:

from shutil import which

from pyvirtualdisplay import Display
from scrapy import signals
from scrapy.http import HtmlResponse
from scrapy.utils.project import get_project_settings
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary

settings = get_project_settings()

HEADLESS = True


class SeleniumMiddleware(object):

    @classmethod
    def from_crawler(cls, crawler):
        middleware = cls()
        crawler.signals.connect(middleware.spider_opened, signals.spider_opened)
        crawler.signals.connect(middleware.spider_closed, signals.spider_closed)
        return middleware

    def process_request(self, request, spider):
        self.driver.get(request.url)
        request.meta['driver'] = self.driver
        body = str.encode(self.driver.page_source)
        return HtmlResponse(self.driver.current_url, body=body, encoding='utf-8', request=request)

    def spider_opened(self, spider):
        if HEADLESS:
            self.display = Display(visible=0, size=(1280, 1024))
            self.display.start()
        binary = FirefoxBinary(settings.get('FIREFOX_EXE') or which('firefox'))
        self.driver = webdriver.Firefox(firefox_binary=binary)

    def spider_closed(self, spider):
        self.driver.close()
        if HEADLESS:
            self.display.stop()

where settings.py contains:

FIREFOX_EXE = '/path/to/firefox.exe'

The problem is that some versions of Firefox don't work with Selenium. To solve this, you can download Firefox version 47.0.1 (this version works flawlessly) from here, then extract it and put it inside your Selenium project. Then modify the Firefox path to:

FIREFOX_EXE = '/path/to/your/scrapyproject/firefox/firefox.exe'