Amazon SQS + django-celery creates thousands of queues (one queue per message)

mic*_*ael 6 django rabbitmq amazon-sqs celery django-celery

I'm looking for a starting point to troubleshoot this problem.

Here are the changes I made in settings.py:

#Rabbit MQ settings
#===============================================================================
# BROKER_HOST = "localhost"
# BROKER_PORT = 5672
# BROKER_USER = "vei_0"
# BROKER_PASSWORD = "1234"
# BROKER_VHOST = "videoencoder"
#===============================================================================




DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = "xxxx"
AWS_SECRET_ACCESS_KEY = "xxxx"
AWS_STORAGE_BUCKET_NAME = "images"
#Amazon SQS settings.
BROKER_TRANSPORT = 'sqs'
BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',
}
BROKER_USER = AWS_ACCESS_KEY_ID
BROKER_PASSWORD = AWS_SECRET_ACCESS_KEY
CELERY_DEFAULT_QUEUE = 'hardwaretaskqueue'
CELERY_QUEUES = {
    CELERY_DEFAULT_QUEUE: {
        'exchange': CELERY_DEFAULT_QUEUE,
        'binding_key': CELERY_DEFAULT_QUEUE,
    }
}


CELERYD_CONCURRENCY = 2
CELERY_TASK_RESULT_EXPIRES = 120
CELERY_RESULT_BACKEND = "amqp"

This morning I woke up to a message from Amazon asking, "Did you really mean to create billions of queues?"

Mar*_*vin 10

When you use CELERY_RESULT_BACKEND = 'amqp', a new queue is created for every result message. To avoid this, simply use a different CELERY_RESULT_BACKEND, such as the database or Redis. Or, if you're not interested in the task results at all, set CELERY_IGNORE_RESULT = True.
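As a minimal sketch, the fix amounts to replacing the `CELERY_RESULT_BACKEND = "amqp"` line in the settings above with one of the following (the Redis URL is an assumption; adjust host/port/db for your setup):

```python
# Option 1: store results in Redis instead of AMQP,
# so no per-result queue is created on the broker.
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"

# Option 2: if you never read task results, skip storing them entirely.
CELERY_IGNORE_RESULT = True
```

With either option in place, only the queues you declare in CELERY_QUEUES (here, `hardwaretaskqueue`) should remain on SQS.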