Asynchronous queue monitoring with Tornado and Pika

dav*_*off 8 python asynchronous tornado amqp rabbitmq

I have an AMQP server (RabbitMQ) that I'd like to publish to and read from within a Tornado web server. To do this, I figured I'd use an asynchronous AMQP Python library; in particular Pika (a variant of it that reportedly supports Tornado).

The code I've written seems to read from the queue successfully, except that at the end of the request I get an exception (the browser response comes back fine):

[E 101219 01:07:35 web:868] Uncaught exception GET / (127.0.0.1)
    HTTPRequest(protocol='http', host='localhost:5000', method='GET', uri='/', version='HTTP/1.1', remote_ip='127.0.0.1', remote_ip='127.0.0.1', body='', headers={'Host': 'localhost:5000', 'Accept-Language': 'en-us,en;q=0.5', 'Accept-Encoding': 'gzip,deflate', 'Keep-Alive': '115', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'User-Agent': 'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13', 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', 'Connection': 'keep-alive', 'Cache-Control': 'max-age=0', 'If-None-Match': '"58f554b64ed24495235171596351069588d0260e"'})
    Traceback (most recent call last):
      File "/home/dave/devel/lib/python2.6/site-packages/tornado/web.py", line 810, in _stack_context
        yield
      File "/home/dave/devel/lib/python2.6/site-packages/tornado/stack_context.py", line 77, in StackContext
        yield
      File "/usr/lib/python2.6/contextlib.py", line 113, in nested
        yield vars
      File "/home/dave/lib/python2.6/site-packages/tornado/stack_context.py", line 126, in wrapped
        callback(*args, **kwargs)
      File "/home/dave/devel/src/pika/pika/tornado_adapter.py", line 42, in _handle_events
        self._handle_read()
      File "/home/dave/devel/src/pika/pika/tornado_adapter.py", line 66, in _handle_read
        self.on_data_available(chunk)
      File "/home/dave/devel/src/pika/pika/connection.py", line 521, in on_data_available
        self.channels[frame.channel_number].frame_handler(frame)
    KeyError: 1

I'm not entirely sure I'm using this library correctly, so I may be doing something obviously wrong. The basic flow of my code is:

  1. A request comes in.
  2. Create a connection to RabbitMQ using TornadoConnection, specifying a callback.
  3. In the connection callback, create a channel, declare/bind my queue, and call basic_consume, specifying a callback.
  4. In the consume callback, close the channel and call Tornado's finish function.
  5. See the exception.

My questions are:

  1. Is this flow correct? I'm not sure what the connection callback is for, other than that nothing works if I don't use it.
  2. Should I create one AMQP connection per web request? The RabbitMQ docs suggest not; they say I should stick to creating channels. But then where would I create the connection, and if it drops briefly, how do I attempt to reconnect?
  3. If I do create one AMQP connection per web request, where should I close it? Calling amqp.close() in my callback seems to mess things up even more.

I'll try to put up some sample code later, but the steps described above lay out the consuming side fairly completely. I'm also having trouble on the publishing side, but consuming from the queue is the more pressing issue.
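In the meantime, here is a rough sketch of that consuming flow as described in steps 1-5. Handler and callback names are placeholders, it reuses the old pika.tornado_adapter API visible in the traceback, and the consumer-callback signature is assumed for this Pika version; it is not my actual code.

import pika
import tornado.web
from pika import tornado_adapter

class QueueHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        # Step 2: one connection per request, with a callback for when it opens.
        params = pika.ConnectionParameters(host='localhost')
        self.connection = tornado_adapter.TornadoConnection(
            params, wait_for_open=False, callback=self.on_connected)

    def on_connected(self):
        # Step 3: create a channel, declare the queue, start consuming
        # (exchange declare/bind omitted here for brevity).
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue='myq', durable=True)
        self.channel.basic_consume(consumer=self.on_message,
                                   queue='myq', no_ack=True)

    def on_message(self, channel, method, header, body):
        # Step 4: close the channel and finish the request; this is
        # roughly where the KeyError above shows up.
        self.channel.close()
        self.write(body)
        self.finish()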

jon*_*esy 8

It would help to see some of your source code, but I've used this same Tornado-supporting Pika module in multiple production projects without problems.

You don't want to create a connection per request. Create a class that wraps all of your AMQP operations and instantiate it as a singleton at the Tornado application level, where it can be used across requests (and across request handlers). I do this in a 'runapp()' function that does that kind of setup and then starts the main Tornado ioloop.

Here's a class called 'Events'. It's a partial implementation (specifically, I don't define 'self.handle_event' here; that part is up to you).

import logging

import pika
from pika import tornado_adapter  # Tornado adapter shipped with this Pika version


class Event(object):
  def __init__(self, config):
    self.host = 'localhost'
    self.port = 5672          # int, so the %i in the log format below works
    self.vhost = '/'
    self.user = 'foo'
    self.exchange = 'myx'
    self.queue = 'myq'
    self.recv_routing_key = 'msgs4me'
    self.passwd = 'bar'

    self.connected = False
    self.connect()


  def connect(self):

    credentials = pika.PlainCredentials(self.user, self.passwd)

    parameters = pika.ConnectionParameters(host=self.host,
                                           port=self.port,
                                           virtual_host=self.vhost,
                                           credentials=credentials)

    srs = pika.connection.SimpleReconnectionStrategy()

    logging.debug('Events: Connecting to AMQP Broker: %s:%i' % (self.host,
                                                                self.port))
    self.connection = tornado_adapter.TornadoConnection(parameters,
                                                        wait_for_open=False,
                                                        reconnection_strategy=srs,
                                                        callback=self.on_connected)

  def on_connected(self):

    # Open the channel
    logging.debug("Events: Opening a channel")
    self.channel = self.connection.channel()

    # Declare our exchange
    logging.debug("Events: Declaring the %s exchange" % self.exchange)
    self.channel.exchange_declare(exchange=self.exchange,
                                  type="fanout",
                                  auto_delete=False,
                                  durable=True)

    # Declare our queue for this process
    logging.debug("Events: Declaring the %s queue" % self.queue)
    self.channel.queue_declare(queue=self.queue,
                               auto_delete=False,
                               exclusive=False,
                               durable=True)

    # Bind to the exchange
    self.channel.queue_bind(exchange=self.exchange,
                            queue=self.queue,
                            routing_key=self.recv_routing_key)

    # handle_event is left for you to define (see note above)
    self.channel.basic_consume(consumer=self.handle_event,
                               queue=self.queue,
                               no_ack=True)

    # We should be connected if we made it this far
    self.connected = True

I then put that in a file called 'events.py'. My RequestHandlers and any backend code use a 'common.py' module that wraps code useful to both (my RequestHandlers never call amqp module methods directly, and the same goes for db, cache, etc.), so I define 'events = None' at module level in common.py, and I instantiate the Event object something like this:

import logging

import tornado.httpserver
import tornado.ioloop

import myapp.common
import myapp.events


def runapp(config):
    # Create the application-level singleton once, at startup.
    if myapp.common.events is None:
        myapp.common.events = myapp.events.Event(config)
    logging.debug("MYAPP.COMMON.EVENTS: %s", myapp.common.events)

    # 'app' (the tornado.web.Application) and 'port' are built elsewhere
    # from the same config.
    http_server = tornado.httpserver.HTTPServer(app,
                                                xheaders=config['HTTPServer']['xheaders'],
                                                no_keep_alive=config['HTTPServer']['no_keep_alive'])
    http_server.listen(port)
    main_loop = tornado.ioloop.IOLoop.instance()
    logging.debug("MAIN IOLOOP: %s", main_loop)
    main_loop.start()
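To give the idea, a request handler can then reach the shared instance through common.py; send_event here is just a hypothetical wrapper method you would add to the Event class, not a Pika call:

import tornado.web

import myapp.common

class PublishHandler(tornado.web.RequestHandler):
    def post(self):
        # The module-level singleton is shared across requests and handlers;
        # send_event() is a hypothetical convenience wrapper on Event.
        myapp.common.events.send_event(self.get_argument('msg'))
        self.write('queued')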

Happy New Year :-D


Mar*_*tos 0

Someone has reported success combining Tornado and Pika here. As far as I can tell, it isn't as simple as just calling Pika from inside Tornado, because both libraries expect to own their own event loop.
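For what it's worth, the Tornado adapter used in the answer above works around this by hooking Pika's socket into Tornado's IOLoop, so a single loop drives both libraries. A minimal sketch, assuming that same pika.tornado_adapter API:

import pika
import tornado.ioloop
from pika import tornado_adapter

def on_connected():
    # channel/queue setup would go here, as in the answer above
    pass

connection = tornado_adapter.TornadoConnection(
    pika.ConnectionParameters(host='localhost'),
    wait_for_open=False,
    callback=on_connected)

# One event loop for everything; Pika's own blocking loop is never started.
tornado.ioloop.IOLoop.instance().start()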