I have a Django application running under uwsgi (10 workers) + nginx, and I am using APScheduler for scheduling. Whenever I schedule a job, it gets executed multiple times. From these answers (ans1, ans2) I understand this happens because a scheduler instance is started in every uwsgi worker. Following the suggestion in this answer, I tried to initialize the scheduler conditionally, binding it to a socket and keeping a "started" flag in the database so that only one scheduler instance is started, but the problem persists. Sometimes the scheduler is also found to be not running at the moment a job is created, and that job then stays pending and is never executed.
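For reference, the socket-binding approach from that answer boils down to something like the sketch below: only the process that manages to bind a fixed localhost port starts the scheduler, and every other uwsgi worker skips it. The port number and function name here are placeholders I picked for illustration, not my actual code:

import socket

def acquire_singleton_port(port=47200):
    """Try to bind a localhost port; only one process on the host can succeed."""
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(('127.0.0.1', port))  # every worker after the first gets 'Address already in use'
        return sock                     # keep the socket referenced so the port stays held
    except socket.error:
        return None

# in the startup path:
_lock_sock = acquire_singleton_port()
if _lock_sock is not None:
    scheduler.start()  # 'scheduler' is the BackgroundScheduler configured below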
I initialize APScheduler in the Django app's urls with the following code, so the scheduler is started when the application starts.
# scheduler_conf.py
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.mongodb import MongoDBJobStore
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor

# client, scheduler_db_conn and TIME_ZONE are defined elsewhere in my settings/module

def job_listener(ev):
    print('event', ev)

job_defaults = {
    'coalesce': True,
    'max_instances': 1
}

scheduler = BackgroundScheduler(job_defaults=job_defaults, timezone=TIME_ZONE, daemon=False)
scheduler.add_jobstore(MongoDBJobStore(client=client), 'default')
scheduler.add_executor(ThreadPoolExecutor(), 'default')
scheduler.add_executor(ProcessPoolExecutor(), 'processpool')
scheduler.add_listener(job_listener)

def initialize_scheduler():
    try:
        # a document in this collection marks that some process already started the scheduler
        if scheduler_db_conn.find_one():
            print('scheduler already running')
            return True
        scheduler.start()
        scheduler_db_conn.save({'status': True})
        print('---------------scheduler started --------------->')
        return True
    except Exception as e:
        print('failed to start scheduler', e)
        return False
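The initializer is then called from the project's urls module, roughly like this (the actual url patterns are omitted):

# urls.py
from scheduler_conf import initialize_scheduler

initialize_scheduler()  # runs when urls.py is imported, i.e. once per worker process

urlpatterns = [
    # ... my url patterns ...
]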
I use the following code to create jobs.
from scheduler_conf import scheduler

def create_job(arg_list):
    try:
        print('scheduler status-->', scheduler.running)
        job = …
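The snippet above is cut off; for reference, the add_job call against this scheduler looks roughly like the sketch below (the callable, trigger, dates and job id are placeholders, not my exact values):

# illustrative only: adding a job through the scheduler configured in scheduler_conf
job = scheduler.add_job(
    some_task,                 # placeholder for the actual callable
    trigger='date',
    run_date=run_at,           # placeholder datetime
    args=arg_list,
    id='some-unique-job-id',   # explicit id, stored in the MongoDB jobstore
    replace_existing=True,
)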
Separately, I sometimes get the error below after restarting celery beat. celery beat is set up as a service and uses Redis (via redbeat) as its scheduler backend; I restart it with:

sudo service celerybeat restart
Below is the exception traceback:
Traceback (most recent call last):
  File "/home/ec2-user/pyenv/local/lib/python3.4/site-packages/celery/beat.py", line 484, in start
    time.sleep(interval)
  File "/home/ec2-user/pyenv/local/lib/python3.4/site-packages/celery/apps/beat.py", line 148, in _sync
    beat.sync()
  File "/home/ec2-user/pyenv/local/lib/python3.4/site-packages/celery/beat.py", line 493, in sync
    self.scheduler.close()
  File "/home/ec2-user/pyenv/local/lib/python3.4/site-packages/redbeat/schedulers.py", line 272, in close
    self.lock.release()
  File "/home/ec2-user/pyenv/local/lib/python3.4/site-packages/redis/lock.py", line 135, in release
    self.do_release(expected_token)
  File "/home/ec2-user/pyenv/local/lib/python3.4/site-packages/redis/lock.py", line 264, in do_release
    raise LockError("Cannot release a lock that's no longer owned")
redis.exceptions.LockError: Cannot release a lock that's no longer owned
During handling of the …
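For reference, celery beat is pointed at redbeat through the Celery configuration, roughly as in the sketch below; the Redis URLs and the lock timeout value shown are examples rather than my exact settings. The lock involved is the one whose release fails in the traceback above:

# celery config (sketch): beat uses redbeat, which holds a lock in Redis while it runs
broker_url = 'redis://localhost:6379/0'

beat_scheduler = 'redbeat.RedBeatScheduler'   # or pass --scheduler redbeat.RedBeatScheduler to celery beat
redbeat_redis_url = 'redis://localhost:6379/1'
redbeat_lock_timeout = 300   # seconds; 'no longer owned' suggests the lock expired or changed owner before shutdown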