Daemonizing Celery Beat with systemd
Recently, I ran into a situation where I had to write a script that runs periodically and does some work. Previously I used cron jobs for tasks like this; now I am using Celery, and yes, it works like a charm!
There are multiple ways to daemonize Celery workers or celery beat. I use systemd, since I am on Ubuntu 16.04 and systemd is built-in.
Let’s think of a small scenario: we have to run an SQL query that inserts a new entity every minute (not a real-world scenario, but it works as an example).
First, we make a directory named simple-celery and place a file named db_update.py inside it. Then paste the contents as below.
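Something along these lines would do. This is only a minimal sketch: the Redis broker URL, the SQLite database and the entities table are assumptions made for the example, so adjust them to your own setup.

import sqlite3
from celery import Celery

# Broker URL is an assumption; point this at your own broker.
app = Celery('db_update', broker='redis://localhost:6379/0')

# Tell celery beat to run the insert task every 60 seconds.
app.conf.beat_schedule = {
    'insert-entity-every-minute': {
        'task': 'db_update.insert_entity',
        'schedule': 60.0,
    },
}

@app.task
def insert_entity():
    # The "SQL query": insert a row stamped with the current time.
    conn = sqlite3.connect('example.db')
    conn.execute('CREATE TABLE IF NOT EXISTS entities '
                 '(id INTEGER PRIMARY KEY, created_at TEXT)')
    conn.execute("INSERT INTO entities (created_at) VALUES (datetime('now'))")
    conn.commit()
    conn.close()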
We can check whether the scheduler is working properly by opening two terminals and running the two commands below, one in each:
celery -A db_update beat --loglevel=info
and
celery -A db_update worker --loglevel=info
This is OK for development, but in production we need to daemonize these so they run in the background. To do so, we will follow the steps below.
1) We will create a configuration file at /etc/default/celeryd.
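This file holds the environment variables that both systemd services will read. A minimal sketch could look like the following; CELERY_BIN assumes a virtualenv under the project directory, so change it to wherever your celery binary lives.

# Names of the nodes to start (a single worker here)
CELERYD_NODES="worker1"

# Absolute path to the celery binary (assumed virtualenv path)
CELERY_BIN="/home/sajid/simple-celery/venv/bin/celery"

# The app/module to load
CELERY_APP="db_update"

# Extra worker options
CELERYD_OPTS="--time-limit=300 --concurrency=2"

# Worker pid/log files (%n and %I are expanded by celery multi)
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="INFO"

# Beat pid/log files
CELERYBEAT_PID_FILE="/var/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"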
2) Now, create another file for the worker, /etc/systemd/system/celeryd.service, with sudo privileges.
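A worker unit along the lines of the generic example in the Celery documentation would look like this; the User, Group and WorkingDirectory values are assumptions for this setup, so adapt them to yours.

[Unit]
Description=Celery worker service
After=network.target

[Service]
Type=forking
User=sajid
Group=sajid
EnvironmentFile=/etc/default/celeryd
WorkingDirectory=/home/sajid/simple-celery
ExecStart=/bin/sh -c '${CELERY_BIN} -A ${CELERY_APP} multi start ${CELERYD_NODES} \
    --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
    --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
    --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} -A ${CELERY_APP} multi restart ${CELERYD_NODES} \
    --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
    --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

[Install]
WantedBy=multi-user.target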
And a file for celery beat, /etc/systemd/system/celerybeat.service, also with sudo privileges.
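A beat unit in the same spirit (again, the paths and user are assumptions):

[Unit]
Description=Celery beat service
After=network.target

[Service]
Type=simple
User=sajid
Group=sajid
EnvironmentFile=/etc/default/celeryd
WorkingDirectory=/home/sajid/simple-celery
ExecStart=/bin/sh -c '${CELERY_BIN} -A ${CELERY_APP} beat \
    --pidfile=${CELERYBEAT_PID_FILE} \
    --logfile=${CELERYBEAT_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL}'

[Install]
WantedBy=multi-user.target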
3) Now, we will create the log and pid directories.
sudo mkdir /var/log/celery /var/run/celery
sudo chown sajid:sajid /var/log/celery /var/run/celery
N.B.: If you reboot your PC, /var/run is cleared (it lives on a tmpfs), so you will have to create the directory and chown it again.
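One way to avoid redoing this by hand after every reboot is a systemd-tmpfiles rule that recreates the runtime directory at boot, for example in a file such as /etc/tmpfiles.d/celery.conf (the file name and owner here are assumptions):

d /var/run/celery 0755 sajid sajid -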
4) After that, we need to reload the systemd daemon. Remember that we should reload it every time we change a service definition file.
sudo systemctl daemon-reload
5) To enable the services to start at boot, we will run:
sudo systemctl enable celeryd
sudo systemctl enable celerybeat
6) We can now start the services:
sudo systemctl start celeryd
sudo systemctl start celerybeat
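To quickly confirm that both units started without errors, we can also check their status:

sudo systemctl status celeryd
sudo systemctl status celerybeat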
7) To verify that everything is OK, we can check the log files:
cat /var/log/celery/beat.log
cat /var/log/celery/worker1.log
8) Now we are all set up, and here comes the monitoring part. There are command-line options for that, but we will not use them frequently in production. Instead, we will use Flower, a real-time visual web monitor for Celery. Let’s install it via pip:
pip install flower
After installation, go to the terminal and type:
celery -A db_update flower
After that, visit http://localhost:5555 to get the visual monitor.
N.B.: Carefully set the user and group, and the paths in CELERY_BIN, EnvironmentFile, and WorkingDirectory; otherwise the services will not run.