[Solved] bull: Empty and clean jobs

Hi,

I'm using bull in my Node.js application. It works well, but sometimes issues appear with repeated jobs, especially on application restarts. In my case, one or more repeated jobs don't run. I've found that I can fix this by clearing Redis (flushdb) and restarting the application. Then everything works fine again.

So I've decided to clean all queues on application start. I found the empty and clean methods in the documentation. However, it is not clear to me how they interact with Redis. Should they clear the Redis database after execution?

I've tried the empty / clean methods, but they don't clean my Redis db, so I still have duplicates:

127.0.0.1:6379[1]> keys *
 1) "bull:vip:repeat"
 2) "bull:notify:repeat:vk-scheduler:notify-scheduler:1506930900000"
 3) "bull:notify:repeat:vk-scheduler:notify-scheduler:1506930600000"
 4) "bull:vip:repeat:scheduler:vip-scheduler:1506942000000"
 5) "bull:vip:repeat:scheduler:vip-scheduler:1506930828000"
 6) "bull:vip:id"
 7) "bull:notify:id"
 8) "bull:notify:repeat"
 9) "bull:notify:repeat:cleaner:notify-cleaner:1506996000000"
10) "bull:notify:repeat:vk-scheduler:notify-scheduler:1506930300000"
11) "bull:vip:repeat:scheduler:vip-scheduler:1506930880000"

In this case vip-scheduler should run twice per day (0 0 2,12 * * *), but there are duplicate entries in Redis, so I'm not sure how many times it will actually be executed. Does it matter what is stored in the Redis database?

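// Repeatable job: six-field cron (sec min hour dom mon dow), runs at 02:00 and 12:00 every day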
queue.add('scheduler', {}, { jobId: 'vip-scheduler', repeat: { cron: '0 0 2,12 * * *' }, removeOnComplete: true, removeOnFail: true });
queue.process('scheduler', vipWorker.scheduler);

Thanks

34 Answers

✔️Accepted Answer

A solution I used that doesn't involve adding another library is this:

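// Grace period 0: remove every job in each of these states, regardless of age.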
queue.clean(0, 'delayed');
queue.clean(0, 'wait');
queue.clean(0, 'active');
queue.clean(0, 'completed');
queue.clean(0, 'failed');

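// Also delete the 'repeat' key (e.g. bull:vip:repeat) that stores the repeatable job definitions.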
let multi = queue.multi();
multi.del(queue.toKey('repeat'));
multi.exec();

Other Answers:

How about queue.clear() to clear out all jobs from a queue?

Ok, so I am trying to make sense of this issue, because in this thread there is a mixture of different problems and also some false expectations about the APIs. I will try to clarify:
queue.empty empties the "queue", meaning that all jobs that are waiting to be processed are discarded. Maybe the name is not a very good one, but that is what it does. It is not a wipe-all kind of thing.
queue.clean removes jobs in a given "status"; for example, you can remove all the jobs that are completed, failed, delayed, etc. It would be possible to implement empty with clean by calling clean several times, once for each status a job can be in while waiting to be processed, such as "wait", "paused", "delayed", "priority".
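
For reference, a minimal sketch of how the two calls are used (the queue name and Redis connection below are placeholders; clean takes a grace period in milliseconds followed by a status):

const Queue = require('bull');
const queue = new Queue('vip', 'redis://127.0.0.1:6379'); // placeholder connection

async function tidyQueue() {
  // empty() only discards jobs that are waiting to be processed.
  await queue.empty();

  // clean(grace, status) removes jobs in the given state that are older than
  // `grace` milliseconds; 0 removes all of them regardless of age.
  await queue.clean(0, 'completed');
  await queue.clean(0, 'failed');
  await queue.clean(0, 'delayed');
}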

Finally, repeatable jobs are a special type of job that creates an entry in the "repeat" zset. As long as an entry representing a given repeatable job is in this set, the job will repeat according to its cron values, so in order to remove it you need to use the queue.removeRepeatable method, as stated in the documentation.
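
As a rough sketch based on the add call from the question (it is assumed here that the name and repeat options, including the jobId, have to match the ones used in queue.add):

async function removeVipScheduler(queue) {
  // Removes the entry from the 'repeat' zset so the job stops being rescheduled.
  await queue.removeRepeatable('scheduler', {
    cron: '0 0 2,12 * * *',
    jobId: 'vip-scheduler'
  });
}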

If the same repeatable job is added several times it should result in a noop; otherwise it is a bug. In the version of bull at the time of this writing (3.4.7) there is no known issue regarding removing repeatable jobs.

I am willing to improve the empty and clean functionality, although it would require a major version.

I will close this thread for now. If, based on the information above, you find inconsistent behaviour, please open a new issue and I will work on it as soon as possible.

Subscribing here - I've been wondering how to approach this same issue for a while now, as every application restart kept inserting duplicate repeat jobs into redis. Thanks for looking into this!

@Sicria or simply use Promise.all([queue.clean(),...])
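
Something along these lines (a sketch; clean() returns a promise, so Promise.all waits for the whole batch):

async function cleanAllStates(queue) {
  // Clean every state in parallel and wait for all of it to finish
  // before the application starts adding jobs again.
  await Promise.all([
    queue.clean(0, 'delayed'),
    queue.clean(0, 'wait'),
    queue.clean(0, 'active'),
    queue.clean(0, 'completed'),
    queue.clean(0, 'failed')
  ]);
}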
