
Currently I'm trying to do something like this:

  • cron job backup_daily
  • cron job backup_weekly
  • cron job backup_monthly

So what might happen is that, for example, the daily and weekly jobs run on the same day. At least one of them will fail, since the file that should be backed up is locked by the other backup process. One simple solution would be to run the jobs at different times, but since we can't say exactly how long a job will take, that's kind of ugly.

So what I was thinking about is a proxy script; instead of the cron jobs above I would do something like:

  • cron job check_if_anybackup_is_running_and_run_backup_daily_else_wait_till_finished
  • cron job check_if_anybackup_is_running_and_run_backup_weekly_else_wait_till_finished
  • cron job check_if_anybackup_is_running_and_run_backup_monthly_else_wait_till_finished

Then the only thing I would care about is that they start with some offset so they don't block each other. Also, the "wait" time should be chosen wisely so they don't recheck at the same moment and block each other again (with three processes we could use an offset of +1 for the weekly process, +2 for the monthly process, and even/uneven counters for the recheck time).
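So the crontab might become something like this (script names, paths and times are just made-up examples):

# staggered start times so the wrappers don't check the lock at the same instant
0  1 * * *   /opt/backup/check_and_run_daily.sh
5  1 * * 0   /opt/backup/check_and_run_weekly.sh
10 1 1 * *   /opt/backup/check_and_run_monthly.sh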

However, I'm not sure how to realize this with a Linux shell script, and I'm not sure what the "right" procedure would be. Use a lock file which is created at process start and check for that? And what happens if it's locked? Is it a "good" method to just sleep and check the lock file again after time X? I'm also not sure what happens when I use sleep in a shell script: is the "counter" scheduled and using up processor power, or is there some kind of interrupt that wakes the waiting process after time X (like "event based")? Are there any nicer methods you could think of? Some shell script code snippets would be perfect, since that's not really something I've ever done before.
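To make it concrete, what I imagine is very roughly this (everything here is made up, no idea if it's sane):

#!/bin/sh
# rough sketch of the wait-and-retry idea; lock path and backup command are placeholders
LOCK=/tmp/backup.lock
while [ -e "$LOCK" ]; do
    sleep 60        # check again in a minute
done
touch "$LOCK"
/opt/backup/backup_daily.sh
rm -f "$LOCK"

(I guess there is also a race between the check and the touch, if two scripts wake up at the same moment?)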


1 Answer


I would combine all three scripts into one with different parameters, like do_backup.sh daily.
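A minimal sketch of that dispatch, assuming your actual backup steps live where the echo lines are (all names here are placeholders):

#!/bin/sh
# do_backup.sh -- one entry point for all three backup flavors
case "$1" in
    daily)   echo "running daily backup..."   ;;  # daily steps go here
    weekly)  echo "running weekly backup..."  ;;  # weekly steps go here
    monthly) echo "running monthly backup..." ;;  # monthly steps go here
    *)       echo "usage: $0 daily|weekly|monthly" >&2; exit 1 ;;
esac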

You're correct about using the same lock file. I would go further and write your process PID into this lock file. That way, if your process sees that the lock file exists, it doesn't just bail out, but checks whether the process that created the file is still running. So even if your process crashes and doesn't delete the lock file, your whole system is still safe.

Here is a sample I use in my scripts to ensure only one copy is running at the same time:

#!/bin/sh

# Lock file named after the calling script, e.g. do_backup.sh.pid
PID_FILE="$0.pid"

[ -f "$PID_FILE" ] && {
   pid=$(cat "$PID_FILE")
   # If the recorded PID is still alive, another copy is running
   ps -p "$pid" > /dev/null 2>&1 && {
      echo "Already running..."
      exit 1
   }
   # Stale lock left behind by a crashed run -- remove it
   rm -f "$PID_FILE"
}

# Record our own PID and clean up automatically on exit
echo $$ > "$PID_FILE"
trap 'rm -f "$PID_FILE"' EXIT

Then in your backup script you would just include this file:

. ./pid.sh    # POSIX sh spelling of bash's "source pid.sh"
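Putting it together, the cron entries might look like this (paths and times are just examples):

# stagger the start times a bit; each invocation takes the lock via pid.sh
0  2 * * *   /opt/backup/do_backup.sh daily
10 2 * * 0   /opt/backup/do_backup.sh weekly
20 2 1 * *   /opt/backup/do_backup.sh monthly

Note that with this lock the later job exits instead of waiting. If you really want the "wait till finished" behavior, you would loop around the check with a sleep instead of exiting; sleep blocks in the kernel and uses no CPU while waiting.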
answered 2012-06-24 at 17:10