How to edit queued processes using && in Linux / bash?
Suppose I have 3 processes queued in a Linux shell, with the commands for the respective processes separated by && like this:
<command1> && <command3> && <command4>
Now, while the command for process1 is running, I want to edit the queue:
e.g. 1: I want to add another process2 between process1 and process3, so that the updated command queue becomes:
<command1> && <command2> && <command3> && <command4>
e.g. 2: Or delete a process, for example process3, from the queue, so that the updated command queue becomes:
<command1> && <command4>
Is there a way to edit the command queue on the fly, that is, while process1 is already running?
You are passing the whole command line to bash at once (there is no queue), and once the shell has parsed it, you cannot change it.
It is not clear whether you really need a queue, or whether simple testing of exit statuses would do, for example:
#!/bin/bash
if cmd1; then
    cmd2
else
    if cmd3; then
        cmd4
    fi
fi
(This will first run cmd1, then, depending on its exit status, run cmd2 if it succeeded, or otherwise try running cmd3 and then cmd4. The last if block could be simplified to cmd3 && cmd4.)
But if you really want a command queue, you'll have to implement it yourself.
Try 1: simple array queue
The first idea is to store your queue in a shell array variable, for example:
#!/bin/bash
declare -a queue

clear() { queue=(); }

is_empty() (( ! ${#queue[@]} ))

# insert an item at the beginning
insert() {
    queue=("$1" "${queue[@]}")
}

# append an item at the end
push() {
    queue=("${queue[@]}" "$1")
}

# remove an item from the beginning
pop() {
    queue=("${queue[@]:1}")
}

# return the first item, without popping it
peek() {
    echo "${queue[0]}"
}
and then use it to run commands like this:
# run commands from the `queue`
run() {
    while ! is_empty; do
        local cmd=$(peek)
        pop
        eval "$cmd" || return
    done
    return 0
}
# run: echo before && sleep 1 && echo after
clear
push 'echo before'
push 'sleep 1'
push 'echo after'
run
But the main problem with this approach is that you cannot modify the queue asynchronously. You can change it before calling run, or from within the run loop itself, but that is probably not what you are asking for.
You might be wondering why we can't just execute the commands in the background with run &. Well, we could, but then the background subprocess (subshell) gets its own copy of the queue variable, and any modifications you make after spawning it will not be visible to the running subprocess.
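A minimal demonstration of this copy-on-fork behavior (the array contents here are just placeholders):

```shell
#!/bin/bash
queue=('echo a' 'echo b')
tmp=$(mktemp)

# The background subshell gets its own snapshot of the array at fork time.
( sleep 0.2; printf '%s\n' "${#queue[@]}" > "$tmp" ) &

# Modify the array in the parent *after* spawning the subshell.
queue+=('echo c')
wait

bg_count=$(cat "$tmp")
rm -f "$tmp"
# prints: background saw 2 items, parent sees 3
echo "background saw $bg_count items, parent sees ${#queue[@]}"
```

Even though the parent appends an item before the subshell reads the array, the subshell still sees only the two items that existed when it was forked.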
Try 2: a simple file queue
Another approach is to implement the queue in a file, keeping one command per line. This will indeed work, but we also need a mutual exclusion mechanism (for example flock).
While this approach works, I don't like the idea of rewriting the complete file every time you need to insert a command at position 0. You could keep the file in memory by creating it on /dev/shm (a ramdisk), but that will not work on macOS.
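A minimal sketch of such a file-backed queue, guarded with flock(1) (a Linux/util-linux tool; the file names and the lock descriptor 9 are arbitrary choices):

```shell
#!/bin/bash
queue_file=$(mktemp)
lock_file=$queue_file.lock

# append an item at the end
push() {
    { flock 9; printf '%s\n' "$1" >> "$queue_file"; } 9>"$lock_file"
}

# insert an item at the beginning (rewrites the whole file)
insert() {
    { flock 9
      printf '%s\n' "$1" | cat - "$queue_file" > "$queue_file.tmp"
      mv "$queue_file.tmp" "$queue_file"
    } 9>"$lock_file"
}

# remove and print the first item
pop() {
    { flock 9
      head -n 1 "$queue_file"
      tail -n +2 "$queue_file" > "$queue_file.tmp"
      mv "$queue_file.tmp" "$queue_file"
    } 9>"$lock_file"
}

push 'echo one'
push 'echo two'
insert 'echo zero'
first=$(pop)   # "echo zero"
```

Each operation takes an exclusive lock on fd 9 before touching the file, so a background run loop and an interactive shell can safely modify the same queue.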
Try 3: Redis Queue
A logical extension of the previous approach is to use an actual shared queue server, for example Redis.
A bash wrapper for this Redis-based queue could be:
#!/bin/bash
redis=(redis-cli --raw)

clear() {
    "${redis[@]}" DEL queue &>/dev/null
}

is_empty() (( $("${redis[@]}" LLEN queue) == 0 ))

# insert the item at the beginning
insert() {
    "${redis[@]}" LPUSH queue "$1" &>/dev/null
}

# append the item at the end
push() {
    "${redis[@]}" RPUSH queue "$1" &>/dev/null
}

# remove the item from the beginning
pop() {
    "${redis[@]}" LPOP queue
}

# return the first item, without popping it
peek() {
    "${redis[@]}" LRANGE queue 0 0
}

# show all items
show() {
    "${redis[@]}" LRANGE queue 0 -1
}
And you can run your first example (with an insert while the previous command was running), for example:
# run commands from the redis queue
run() {
    while ! is_empty; do
        eval "$(pop)" || return
    done
    return 0
}

# start with: echo before && sleep 3 && echo after
clear
push 'echo before'
push 'sleep 3'
push 'echo after'
run &

# but then, 1 sec after starting, modify the queue: insert another command
sleep 1
insert 'echo inserted'
wait
Output example:
$ ./redis-queue-demo.sh
before
inserted
after
Solution for your examples
So, using the Redis approach, your first example (command insertion) would look like this:
clear
push 'echo command1; sleep 2'
push 'echo command3'
push 'echo command4'
run &
sleep 1
insert 'echo command2'
wait
Output:
command1
command2
command3
command4
The second example (command deletion):
clear
push 'echo command1; sleep 2'
push 'echo command3'
push 'echo command4'
run &
sleep 1
pop >/dev/null
wait
Output:
command1
command4
Instead of running command1 && command2 && command3
in a shell, move the logic into the commands themselves, so each one decides what to run next. Thus:
command1 --c1-arg1 --c1-arg2 -- command2 --c2-arg1 -- command3
...where each command contains logic similar to:
command1() {
    local arg1=0 arg2=0
    local -a next_command=() extra_args=()
    while (( $# )); do
        case $1 in
            --c1-arg1) arg1=1 ;; # example
            --c1-arg2) arg2=1 ;;
            --) shift; next_command=( "$@" ); break ;;
        esac
        shift
    done
    # ...now, let's say we wish to conditionally add a command4 to the end of the list:
    if (( arg1 && arg2 )); then
        (( ${#next_command[@]} )) && next_command+=( -- )
        next_command+=( command4 )
    fi
    # regardless, hand control to the next command if it exists
    (( ${#next_command[@]} )) && exec "${next_command[@]}"
}
This way, you are not trying to change the state of the shell (your parent process); each command simply modifies its own next_command array.
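A self-contained sketch of this hand-off, using hypothetical step1/step2 functions and a plain call instead of exec (exec replaces the process with an external program, so it cannot invoke a shell function):

```shell
#!/bin/bash
# Each step consumes its own options up to `--`, then invokes whatever follows.
run_step() {
    local name=$1; shift
    local -a next=()
    while (( $# )); do
        case $1 in
            --) shift; next=( "$@" ); break ;;
            *)  echo "$name got option $1" ;;
        esac
        shift
    done
    # hand control to the next command in the chain, if any
    if (( ${#next[@]} )); then "${next[@]}"; fi
}

step1() { run_step step1 "$@"; }
step2() { run_step step2 "$@"; }

out=$(step1 --a -- step2 --b)
echo "$out"
```

Here step1 handles --a, then finds `step2 --b` after the `--` separator and invokes it, so the chain prints one line per option, in order.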
As people (Barmar and Patrick) have already pointed out, there is no way to manipulate the queue once command1 is executing.
Once it has run, you can check its exit status with $? (or use whatever other logic you need), and build the rest of your queue accordingly.
<command1>
if [ $? -eq 0 ]
then
    # on success
    <command2> && <command3> && <command4>
else
    # on failure
    <command4>
fi
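The same branching reads more idiomatically without $?, by testing the command directly in the if. A sketch with placeholder functions standing in for command1..command4:

```shell
#!/bin/bash
# placeholder stand-ins for the real commands
command1() { true; }      # pretend command1 succeeds
command2() { echo two; }
command3() { echo three; }
command4() { echo four; }

out=$(
    if command1; then
        # on success
        command2 && command3 && command4
    else
        # on failure
        command4
    fi
)
echo "$out"
```

Since command1 succeeds here, the success branch runs and prints two, three, four; making command1 return false would run only command4.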