
Some bash tips -- 4 -- Prevent concurrent executions of a script

This post is part of a series of bash tips I find useful in every script -- the whole list can be found here.

Now that you have followed many tips to improve your shell scripting skills :), you will write useful scripts which will be frequently used, and one day you will have to implement a feature to avoid concurrent executions of your script -- this often happens with backup scripts, for example.

A bad solution to this problem I have often seen implemented is to count the number of running processes of your script, like:
$ cat backup.sh
#!/bin/bash
ps -ef | grep backup.sh | grep -v grep
$ ./backup.sh
fred       412   167  0 22:51 tty3     00:00:00 /bin/bash ./backup.sh
$
So here, we grep the execution(s) of the script (backup.sh) and exclude the grep which greps it; then usually, when people implement this solution, they consider that a concurrent script is running if the number of running processes is greater than 1 (the script itself being the first one). But what if an extra execution of the script starts at exactly the same time? What if a script named another_backup.sh is running? We would then grep 2 processes, which is more than 1, and wrongly conclude that a concurrent script is already running:
$ ./backup.sh
fred       397   167  0 22:52 tty3     00:00:00 /bin/bash ./another_backup.sh  <== ?
fred       400   167  0 22:52 tty3     00:00:00 /bin/bash ./backup.sh
$
We could obviously grep ^backup.sh$ to be sure not to match another_backup.sh, but you cannot prevent someone else from executing his own backup.sh script from another location to back up something completely different; on top of that, what if another user also runs the script (which, let's say, is allowed):
$ ./backup.sh
root       452   433  0 22:59 tty4     00:00:00 /bin/bash ./backup.sh  <== another execution by another user
fred       454   167  0 22:59 tty3     00:00:00 /bin/bash ./backup.sh
$
. . . in short, counting processes is not a reliable way to prevent concurrent executions of a script.
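For reference, the fragile pattern usually looks like this -- a minimal sketch, with the -gt 1 threshold accounting for the script's own process showing up in the ps output:
#!/bin/bash
# Fragile: counts every process whose command line contains "backup.sh"
RUNNING=$(ps -ef | grep backup.sh | grep -v grep | wc -l)
if [[ ${RUNNING} -gt 1 ]]; then    # 1 is the script itself, more means "concurrent"
    echo "Concurrent execution detected; cannot continue."
    exit 2
fi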

A good and robust solution is to:
  1. Check in a "lockfile" the process ID (PID) of the previous execution
  2. Verify if the previous process is still running
  3. If yes, we stop here
  4. If not, we save the PID of the current execution in the "lockfile" and we continue executing the script
As we saw earlier, ps | grep is not the best way of checking point 2; a stronger way is to use kill with the -0 option, which returns 0 if the process exists (and we are allowed to signal it) and non-zero if the process no longer exists:
$ kill -0 167
$ echo $?
0  <== process exists
$ kill -0 168
-bash: kill: (168) - No such process  <== process does not exist any more
$ echo $?
1
$
A complete piece of code using a lockfile and kill -0 to check whether the previous process still exists is shown below:
      TS="date "+%Y-%m-%d_%H:%M:%S""   # A timestamp for a nice outut in a logfile
LOCKFILE="${HOME}/.lockfile"           # The lockfile
if [[ -s ${LOCKFILE} ]]; then          # File exists and is not empty
    if ! kill -0 $(cat ${LOCKFILE}) 2> /dev/null; then    # pid does not exist
        echo "$($TS) [WARNING] The lockfile ${LOCKFILE} exists but the pid it refers to ($(cat ${LOCKFILE})) does not exist any more, we can then safely ignore it and continue."
    else                                                  # pid exists
        echo "$($TS) [ERROR] Concurrent execution detected; cannot continue."
        exit 2
    fi
fi
echo $$ > "${LOCKFILE}"                # Update the lockfile with the current PID
. . .
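An optional refinement, not shown above: remove the lockfile when the script exits, so the next execution starts from a clean state. A minimal sketch using a bash trap (the kill -0 check already copes with a stale file if the script is killed abruptly):
trap 'rm -f "${LOCKFILE}"' EXIT        # Clean the lockfile up when the script exits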
There you go, you now have a very robust way to avoid 2 concurrent executions of the same script. If you want to allow different users to run the same script concurrently, put the lockfile in the home directory ($HOME) of each user; if you want to prevent any other user from executing the same script as you, just use a lockfile located in a directory which can be accessed by anyone -- easy-peasy.
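In other words -- the /tmp path below is just an illustration, any directory readable and writable by all the users involved works:
# Pick one of the two depending on the behavior you want:
LOCKFILE="${HOME}/.lockfile"           # Per-user lock: each user can run his own execution
LOCKFILE="/tmp/backup.lockfile"        # System-wide lock: only one execution for everyone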

I have used this mechanism more than once with great success -- I strongly recommend using it to prevent concurrent executions of the same script.



2 comments:

  1. This is a simple problem that's surprisingly difficult to get right. And unless the get/set is atomic, there's an inherent race condition. For example, file I/O can hang (hello NFS!), and multiple copies of a script can queue up. When the I/O condition is resolved, they all check and create a lockfile at the same time, and proceed to run simultaneously. It's a BASH FAQ (http://mywiki.wooledge.org/BashFAQ/045) with a few safer suggestions.
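     For reference, one of the safer patterns suggested in that FAQ relies on flock(1), which makes the check-and-acquire step atomic -- a minimal sketch, the file descriptor number 9 being an arbitrary choice:
     exec 9> "${LOCKFILE}"             # Open (and create if needed) the lockfile on fd 9
     if ! flock -n 9; then             # Try to take an exclusive lock without waiting
         echo "Concurrent execution detected; cannot continue."
         exit 2
     fi
     # ... script body; the lock is released automatically when the script exits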

    Replies
    1. The solution shown in this blog should work in 99.9% of the cases, and more extreme scenarios may indeed require different ways of managing these concurrent executions. http://mywiki.wooledge.org/BashFAQ/045 does not seem to offer a perfect solution either, as all the suggestions seem to carry some risk. Maybe a random sleep of a few microseconds before acquiring the lock would help, and/or I would use a MySQL/MariaDB database to keep that process number instead of an OS file -- a strongly consistent database would prevent the potential issues described in the BashFAQ; it would also prevent different servers from having concurrent executions of the same script.
      I would not save this lockfile on NFS; NFS is not really known for its reliability :)

