Eri*_*ert 9 ruby-on-rails mongodb sidekiq
I have a background job that performs a map/reduce job on MongoDB. When a user sends more data to a document, it kicks off a background job that runs against that document. If the user sends multiple requests, it starts multiple background jobs for the same document, but only one really needs to run. Is there a way I can prevent multiple duplicate instances? I was thinking of creating a queue per document and making sure it is empty before submitting a new job. Or perhaps I could set a job ID that is the same as my document ID and check that none exists before submitting?
Also, I just found the sidekiq-unique-jobs gem, but its documentation is nonexistent. Does it do what I want?
crf*_*ftr 12
My initial suggestion would be a mutex for this particular job. But since you may have multiple application servers running Sidekiq workers, I would suggest something at the Redis level.
For instance, use redis-semaphore within your Sidekiq worker definition. An untested example:
def perform
  s = Redis::Semaphore.new(:map_reduce_semaphore, connection: "localhost")
  # Verify that this Sidekiq worker is the first to reach this semaphore.
  unless s.locked?
    # Auto-unlocks in 90 seconds; set to what is reasonable for your worker.
    s.lock(90)
    begin
      your_map_reduce
    ensure
      # Release the lock even if the job raises.
      s.unlock
    end
  end
end

def your_map_reduce
  # ...
end
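The question's other idea, using the document ID itself as a dedup key before enqueueing, can also be sketched. The names below (`JobRegistry`, `enqueue_map_reduce`) are hypothetical, and the in-memory set stands in for what would be a Redis `SET key 1 NX EX ttl` call in production:

```ruby
require "set"

# In-memory stand-in for a Redis-backed registry of pending jobs.
# With a real Redis client you would use redis.set(key, 1, nx: true, ex: ttl).
class JobRegistry
  def initialize
    @pending = Set.new
  end

  # Returns true only for the first caller registering a given document id.
  def try_register(doc_id)
    return false if @pending.include?(doc_id)
    @pending.add(doc_id)
    true
  end

  # Called when the job finishes, so a later update can enqueue again.
  def release(doc_id)
    @pending.delete(doc_id)
  end
end

# Hypothetical enqueue guard: only the first request per document
# actually enqueues a background job; duplicates are skipped.
def enqueue_map_reduce(registry, doc_id)
  return :skipped unless registry.try_register(doc_id)
  # MapReduceWorker.perform_async(doc_id)  # real Sidekiq call would go here
  :enqueued
end
```

The worker's `perform` would then call `registry.release(doc_id)` when done, so the next batch of user data triggers a fresh job.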
https://github.com/krasnoukhov/sidekiq-middleware
UniqueJobs provides uniqueness for jobs.
Usage
An example worker:
class UniqueWorker
  include Sidekiq::Worker

  sidekiq_options({
    # Should be set to true (enables uniqueness for async jobs)
    # or :all (enables uniqueness for both async and scheduled jobs)
    unique: :all,

    # Unique expiration (optional, default is 30 minutes).
    # For scheduled jobs it is calculated automatically based on the
    # schedule time and the expiration period.
    expiration: 24 * 60 * 60
  })

  def perform
    # Your code goes here
  end
end
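Uniqueness middleware of this kind typically works by hashing the worker class and its arguments into a digest, and storing that digest with an expiration; a second enqueue with the same digest is dropped while the lock is live. A hedged sketch of that mechanism, with a plain Hash standing in for Redis and `UniquenessFilter` as a hypothetical name:

```ruby
require "digest"

# Sketch of digest-based job uniqueness: a digest of (worker class, args)
# is held until it expires. The Hash here stands in for Redis; its keys
# map digest => expiry timestamp (as a Float of epoch seconds).
class UniquenessFilter
  def initialize(clock: -> { Time.now.to_f })
    @locks = {}
    @clock = clock
  end

  # Stable key for a job: same class + same args => same digest.
  def digest(worker_class, args)
    Digest::MD5.hexdigest("#{worker_class}:#{args.inspect}")
  end

  # Returns true if the job may be enqueued (no live duplicate),
  # recording a lock that lasts `expiration` seconds.
  def acquire(worker_class, args, expiration)
    key = digest(worker_class, args)
    now = @clock.call
    return false if @locks[key] && @locks[key] > now
    @locks[key] = now + expiration
    true
  end
end
```

With `unique: :all` and `expiration: 24 * 60 * 60` as in the worker above, a duplicate `UniqueWorker.perform_async(doc_id)` for the same document would be suppressed for up to a day, or until the job completes and clears its digest.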