How to detect duplicate auto-resubmitting batch job - VMS



Thread: How to detect duplicate auto-resubmitting batch job

  1. How to detect duplicate auto-resubmitting batch job

    Hi everybody
    I have an auto-resubmitting daily disk-cleaning job. It is submitted
    at startup and should resubmit itself after running every day at
    8h00. Yesterday, I noticed I now have 3 entries for the same job, all
    waiting to execute at the same time. Notwithstanding the fact that
    some startup job is submitting the batch more than once (I will need
    to find it!), I need to add code to my DCL procedure to verify that
    only one job is present.

    What is the usual way to detect that there is only one instance of
    the procedure running daily? What code can I use in DCL to verify
    that I have only one copy of the batch job present in the queue? Can
    you provide examples?

    Here is a part of my cleaning.com procedure:

    $ datelog = f$cvtime("tomorrow","comparison","date")
    $ submit /queue=sys$batch /noprinter /nonotify /restart -
            /log=my$log:cleaning-'datelog'.log -
            /after="tomorrow+08:" cleaning.com
    $ delete my$log:*.log;* /created /before="today-30-0"
    $ ...

    Yesterday, I had 3 entries waiting to execute at 8h00. I would like
    entries 2 and 3 to kill themselves if an identical job is already
    present in the queue. Note that all 3 will begin executing in the
    same split second.
    Suggestions ? Examples ? Links to code ?
    TIA
    Van


  2. Re: How to detect duplicate auto-resubmitting batch job

    vancouvercancun@yahoo.ca wrote:
    > Hi everybody
    > I have a auto-resubmiting daily disk cleaning job. [snip]


    $ HELP SET PROCESS /NAME

    No two processes can have the same name so when the second process
    executes SET PROCESS /NAME, it will fail.


  3. Re: How to detect duplicate auto-resubmitting batch job

    On Aug 14, 3:15 pm, vancouvercan...@yahoo.ca wrote:
    > Hi everybody
    > I have a auto-resubmiting daily disk cleaning job. [snip]



    To me, this looks like a problem I'd want to know about. Something is
    not working. You can use the F$GETQUI lexical to get information about
    jobs in the queue, or you can do something like the following (which
    predates the pipe command as well as f$getqui, but like they say, if
    it ain't broke...):

    $ SHOW QUEUE /ALL /OUTPUT=SYS$SCRATCH:a_b_temp.tmp SYS$BATCH
    $ SEARCH /NOOUTPUT /NOWARNINGS SYS$SCRATCH:a_b_temp.tmp IMPORTANT_JOB
    $ save_status = $STATUS
    $ DELETE SYS$SCRATCH:a_b_temp.tmp;*
    $ IF save_status .EQS. "%X00000001"
    $ THEN
    $   WRITE SYS$OUTPUT "IMPORTANT_JOB is already submitted."
    $   GOTO QUIT
    $ ENDIF

    It would be easy to change this to use PIPE, or you could learn about
    the strange workings of F$GETQUI. Building a unique .tmp file name has
    also been left as an exercise.

    You can put your similar logic in the com file's scheduler, or in the
    com file itself to keep it from submitting/starting a duplicate job
    and to tell you there's a problem.
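
    A PIPE version of the same check might look like this (a hedged,
    untested sketch; IMPORTANT_JOB and SYS$BATCH are placeholders, as
    above):

    $ ! Untested sketch: PIPE variant of the SHOW QUEUE + SEARCH check,
    $ ! which avoids the temporary file entirely.
    $ PIPE SHOW QUEUE /ALL SYS$BATCH | -
          SEARCH /NOOUTPUT /NOWARNINGS SYS$INPUT: IMPORTANT_JOB
    $ IF $STATUS .EQS. "%X00000001"
    $ THEN
    $   WRITE SYS$OUTPUT "IMPORTANT_JOB is already submitted."
    $   GOTO QUIT
    $ ENDIF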


  4. Re: How to detect duplicate auto-resubmitting batch job

    In article <1187122509.791422.293350@l22g2000prc.googlegroups. com>,
    vancouvercancun@yahoo.ca writes:

    > What is the usual way to detect that there is only one instance/job/
    > batch of the procedure running daily ? What code can I use in DCL to
    > verify that I have only one version of the batch job present in the
    > queue ? Can you provide examples ?


    Before the job resubmits itself, it should check whether a job of the
    same name is already submitted and on hold, i.e. a job of the same name
    other than itself.

    Also, it might be a good idea to have a very basic batch job which
    essentially just resubmits itself and calls the main procedure. That
    way, if you change the main procedure, you don't have to resubmit the
    batch job.


  5. Re: How to detect duplicate auto-resubmitting batch job

    In article <46C21AFF.8020203@comcast.net>, "Richard B. Gilbert"
    writes:

    > $ HELP SET PROCESS /NAME
    >
    > No two processes can have the same name so when the second process
    > executes SET PROCESS /NAME, it will fail.


    On the same node. If he submits to a generic queue, with execution
    queues on different nodes, then he could have as many processes as
    nodes. If the entry is retained on error, he still has to do manual
    cleanup. If not, he might miss real errors.


  6. Re: How to detect duplicate auto-resubmitting batch job

    vancouvercancun@yahoo.ca wrote:
    >
    > Hi everybody
    > I have a auto-resubmiting daily disk cleaning job. It is submitted at
    > startup and should be resubmitting itself after running every day at
    > 8h00. Yesterday, I noticed I now have 3 entries for the same job, [snip]


    DCSC$STARTUP has the same problem... :-(


    --
    David J Dachtera
    dba DJE Systems
    http://www.djesys.com/

    Unofficial OpenVMS Marketing Home Page
    http://www.djesys.com/vms/market/

    Unofficial Affordable OpenVMS Home Page:
    http://www.djesys.com/vms/soho/

    Unofficial OpenVMS-IA32 Home Page:
    http://www.djesys.com/vms/ia32/

    Unofficial OpenVMS Hobbyist Support Page:
    http://www.djesys.com/vms/support/

  7. Re: How to detect duplicate auto-resubmitting batch job

    On 08/14/07 15:15, vancouvercancun@yahoo.ca wrote:
    > Hi everybody
    > I have a auto-resubmiting daily disk cleaning job. [snip]


    Pony up for CA-SCHEDULER (or whatever it's called now).

    --
    Ron Johnson, Jr.
    Jefferson LA USA

    Give a man a fish, and he eats for a day.
    Hit him with a fish, and he goes away for good!

  8. Re: How to detect duplicate auto-resubmitting batch job

    ...
    > > $ submit /queue=sys$batch /noprinter /nonotify /restart /log=my
    > > $log:cleaning-'datelog'.log -
    > > /after="tomorrow+08:" cleaning.com


    ...
    >
    > Pony up for CA-SCHEDULER (or whatever it's called now).
    >


    Your issue may be related to the /restart; I've seen that happen
    when the system is rebooted.

    It's not hard to code a few lines of DCL to check if the job is
    already in the queue, and to resubmit if it is not there. You
    will want to learn how to use f$getqui, which has an interesting
    interface.

    For this kind of job (a mini-batch self-scheduler) I use the
    extension of .shell, rather than .com. Typically, the DCL
    code needed is short, and does not change. Everything else
    goes into a procedure with the same name, and .com
    extension. That way, when something changes, it's usually
    in the .com procedure, and the next time the batch job runs,
    it gets picked up. Otherwise, you have to delete the current
    entry and resubmit it, since the queue manager stores the
    File ID, not the file name. The .shell job just invokes the
    @procname.com after the resubmit.
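
    A minimal sketch of that layout (queue, names, and times are
    illustrative assumptions, not Carl's actual files):

    $ ! CLEANING.SHELL - hedged sketch of the thin self-rescheduling shell.
    $ ! The real work lives in CLEANING.COM, so edits there are picked up
    $ ! on the next run without touching the queue entry.
    $ datelog = f$cvtime("tomorrow","comparison","date")
    $ submit /queue=sys$batch /name=CLEANING /noprinter /nonotify -
            /log=my$log:cleaning-'datelog'.log -
            /after="tomorrow+08:00" cleaning.shell
    $ @cleaning.com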

    HTH,

    Carl

  9. Re: How to detect duplicate auto-resubmitting batch job

    On Aug 14, 4:15 pm, vancouvercan...@yahoo.ca wrote:
    > Hi everybody
    > I have a auto-resubmiting daily disk cleaning job. [snip]


    Since you don't know what is submitting extra instances of your job,
    you'll have to do it from your job ***while it's starting***. I'd do
    something like this:

    Near the top of your procedure, before it does anything "significant":

    *** NOT TESTED!!! ***

    $ RUNDATE = F$CVTIME(,,"DATE")
    $ ON ERROR THEN GOTO _DATE_ERROR
    $ COPY NL: SYS$SCRATCH:CLEANING.'RUNDATE';1
    $! (put your usual error checking ON statement here)
    ..
    ..
    ..
    $_DATE_ERROR:
    $ SET NOON
    $ WRITE SYS$OUTPUT "Another instance already started for this date"
    $ EXIT %X18008002

    *** NOT TESTED!!! ***

    If SYS$SCRATCH:CLEANING.'RUNDATE';1 already exists, another instance
    of your job created it. Trying to copy to the same version produces an
    error, upon which you gracefully exit. Of course you'll have to arrange
    for something to clean up these CLEANING.'RUNDATE';1 files, but not too
    early!

    NOTE: Searching for other instances of your job may be problematic for
    a variety of reasons, not the least of which is that they might all
    start at very nearly the same time and that it will of course find
    itself! If you search for other instances of your job before it
    resubmits itself, you'll have to ignore the current running instance
    and it will offer no protection against that other rogue job (a system-
    startup procedure?) from submitting extras. The best thing to do, of
    course, is to track down why you're getting extra jobs in the first
    place and fix it.

    AEF


  10. Re: How to detect duplicate auto-resubmitting batch job

    On Aug 15, 7:47 am, "Carl Friedberg" wrote:
    > [snip]
    > Your issue may be related to the /restart; I've seen that happen
    > when the system is rebooted.

    This could well be if the system reboots, or the job is requeued,
    after the job resubmits itself but before it exits. Judicious use of
    the BATCH$RESTART symbol is required to avoid that.
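
    The usual pattern (a hedged sketch; the label and times are
    assumptions, not code from the thread) is to skip the resubmit on a
    restarted pass:

    $ ! Hedged sketch: on a restart, jump past the SUBMIT so that a reboot
    $ ! between the resubmit and job exit cannot queue a second copy.
    $ IF $RESTART .AND. F$TYPE(BATCH$RESTART) .NES. "" THEN GOTO 'BATCH$RESTART'
    $ SUBMIT /QUEUE=SYS$BATCH /RESTART /AFTER="TOMORROW+08:00" CLEANING.COM
    $ SET RESTART_VALUE = DO_WORK
    $DO_WORK:
    $ ! ... the actual cleaning work goes here ...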

    >
    > It's not hard to code a few lines of DCL to check if the job is
    > already in the queue, and to resubmit if it is not there. You
    > will want to learn how to use f$getqui, which has an interesting
    > interface.


    But the job will be there. It's running! And that doesn't protect from
    an unknown submitter submitting another instance of the job at a later
    time.

    > For this kind of job (a mini-batch self-scheduler) I use the
    > extension of .shell, rather than .com. Typically, the DCL
    > code needed is short, and does not change. Everything else
    > goes into a procedure with the same name, and .com
    > extension. That way, when something changes, it's usually
    > in the .com procedure, and the next time the batch job runs,
    > it gets picked up. Otherwise, you have to delete the current
    > entry and resubmit it, since the queue manager stores the
    > File ID, not the file name. The .shell job just invokes the
    > @procname.com after the resubmit.
    >
    > HTH,
    >
    > Carl




  11. Re: How to detect duplicate auto-resubmitting batch job

    Or use *non*-autosubmitting jobs.
    I use a CRON-like tool called CRON :-)

    It's a DCL routine running as a detached
    process that checks a config (text) file every
    even hour; wherever there is a match (on the
    hour/weekday/day-of-month fields), the command
    (usually a SUBMIT) on the same line is executed.

    Rock-solid. Starts at boot and never dies. :-)

    One odd thing...

    SHOW SYSTEM says "Uptime 584 17:08" but
    SHOW PROCESS/ACCOUNTING on the CRON process says
    "Connect time 584 18:59". So the CRON process
    has been running for approx. 2 hours *more*
    than the current uptime :-) :-)

    Anyway, I currently have approx. 20 SUBMITs in
    the CRONTAB.DAT, which is scanned each hour.

    Main pros:

    - Easy to "list" all regular batch jobs. Just
    TYPE the text file. No need to run SEARCH over
    all potential COM files...

    - Easy to change (or disable) any batch job. Just
    delete or comment out the line in the text file. Or
    change the hour field or whatever.

    The text file is re-read each cycle, so there is no
    need to restart anything after an edit.
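
    The poster doesn't show the CRONTAB.DAT layout; a plausible sketch
    (pure assumption, just to make the idea concrete):

    !  CRONTAB.DAT - hedged sketch; fields are hour, weekday, day-of-month,
    !  then the DCL command to execute on a match.
    08   *     *    SUBMIT /QUEUE=SYS$BATCH /NAME=CLEANING  DSA0:[TOOLS]CLEANING.COM
    22   FRI   *    SUBMIT /QUEUE=SYS$BATCH /NAME=BACKUP    DSA0:[TOOLS]BACKUP.COM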





  12. Re: How to detect duplicate auto-resubmitting batch job

    On Aug 14, 4:15 pm, vancouvercan...@yahoo.ca wrote:
    > Hi everybody
    > I have a auto-resubmiting daily disk cleaning job. [snip]


    Thanks to all who responded with their suggestions. Since my jobs try
    to execute at the same time, I chose to add the SET PROCESS/NAME=xxx
    command and check $STATUS. It looks like the following:

    $ Set NoOn
    $ Set Process/Name="CLEANING"
    $ if ( .not. $status )
    $ then
    $   write sys$output " Duplicate process name error !"
    $   exit
    $ endif
    $ submit ....
    $ delete ...
    I will now search why I have more than 1 job. I am pretty sure the
    startup procedure only submits one job. Could it be that the job is
    kept in the queue after a reboot ?

    Van


  13. Re: How to detect duplicate auto-resubmitting batch job

    vancouvercancun@yahoo.ca wrote:
    > startup procedure only submits one job. Could it be that the job is
    > kept in the queue after a reboot ?


    Yes. Jobs persist across reboots. So you'd need to first scan the queue
    using lexicals (F$GETQUI) to see if the job is already waiting to
    execute, and only submit it if it is not found. (And make sure you use
    SUBMIT/NAME=xxx so you can scan for a job named xxx already in the
    queue.)

    The other suggestion, to just exit if you can't set the process name,
    is much simpler to implement. However, it works only if the jobs
    execute at a specific time (e.g. submit/after="tomorrow+00:01:00"), so
    that no matter when they are submitted, they will all execute at the
    same fixed time the next day.

    If your jobs are submitted with /after="+23:00:00" instead, you could
    still have multiple jobs in the queue: each job would execute again 23
    hours later, so the multiple jobs would execute at different times and
    never "bump" into each other.

  14. Re: How to detect duplicate auto-resubmitting batch job

    In article <1187122509.791422.293350@l22g2000prc.googlegroups. com>, vancouvercancun@yahoo.ca writes:
    >
    > What is the usual way to detect that there is only one instance/job/
    > batch of the procedure running daily ? What code can I use in DCL to
    > verify that I have only one version of the batch job present in the
    > queue ? Can you provide examples ?
    >


    I haven't seen that problem. Is someone/something doing extra
    queue submissions? Looking at your snapshot, it's possible that
    restarts happened after the submit and before the original job
    ended; you should consider updating the restart parameter on
    the first line after the submit (I do).

    It is possible to query the queue manager for other entries of
    the same .COM file, and for the queue entry number. What I would
    do, if this problem continues, is write the script to look for such
    entries and then kill itself if it isn't the lowest entry number (the
    order isn't really important; it's just a hook to identify, and not
    kill, exactly one job).


  15. Re: How to detect duplicate auto-resubmitting batch job

    vancouvercancun@yahoo.ca wrote:
    > [snip]
    > I will now search why I have more than 1 job. I am pretty sure the
    > startup procedure only submits one job. Could it be that the job is
    > kept in the queue after a reboot ?


    A reboot does not clear a batch queue! Of course someone could have
    added code to the startup to do so but it doesn't happen by default!



  16. Re: How to detect duplicate auto-resubmitting batch job

    On Aug 15, 10:25 am, koeh...@eisner.nospam.encompasserve.org (Bob
    Koehler) wrote:
    > [snip]
    >
    > I haven't seen that problem. Is someone/something doing extra
    > queue submissions? Looking at your snapshot it's possible that
    > restarts happened after the submit and before the original job
    > ended, you should consider updating the restart parameter on
    > the first line after the submit (I do).
    >
    > It is possible to query the queue manager for other entries of
    > the same .COM file and for the queue entry number. What I would
    > do if this problem continues is write the script to look for such
    > and then kill itself if it isn't the lowest entry number (order
    > isn't really important, just a hook to identify and not kill exactly
    > one job).


    And run SHOW ENTRY/FULL on the entry numbers for your multiple
    instances to see what time they were submitted. Might be a useful clue
    -- might not.

    AEF


  17. Re: How to detect duplicate auto-resubmitting batch job

    On Aug 15, 7:54 am, vancouvercan...@yahoo.ca wrote:
    > [snip]
    > I will now search why I have more than 1 job. I am pretty sure the
    > startup procedure only submits one job. Could it be that the job is
    > kept in the queue after a reboot ?


    Which is why you have to do something special at startup.

    Sorry for my overly simplified example before. If you allow me to
    expand on it a bit:

    I prefer to use a single submitter/scheduler in startup to make life
    easier. You simply have it define/sys a logical that contains the
    start-time or the value of your choice and if it finds the logical
    already set, it just exits.

    For no reason other than personal preference, I don't like to use SET
    PROC/NAME. Instead, I use the SUBMIT/NAME=name.

    By making the name significant and unique, you can check it *before*
    you submit it. In my lame example, IMPORTANT_JOB is actually 4
    characters that signify the type of job (BUP_, CLN_, etc), and are
    also used as the first 4 characters of the "real" file name. I've
    found 3 or 4 characters are enough to at least hint at the job's
    purpose in show/sys.

    The scheduler (or self-submitting job) will parse the submit/after's
    datetime to an 11 character string, and append that to the prefix.

    nb: I've set a base year of 1900 and use, for a job scheduled to run
    tomorrow for example, f$cvtime("TOMORROW",,"YEAR") - 1900 to get a 3
    digit year string to gain an extra character in the prefix (I'll let
    whoever's around in 2899 worry about roll-over;-)

    I next parse out MMDDHHMM and append that to the prefix+year. So a
    backup job scheduled for 15-AUG-2007 01:00 will be named
    BUP_10708150100. If more than one job needs to run, I'd use BU1_,
    BU2_ or BUPA, BUPB, etc.
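
    A hedged sketch of building such a name from the scheduled time (not
    the poster's actual code; the prefix and date are assumptions):

    $ ! Sketch: build a name like BUP_10708150100 from the /AFTER time.
    $ prefix  = "BUP_"
    $ after   = "15-AUG-2007 01:00"
    $ year    = f$cvtime(after,,"YEAR") - 1900        ! 2007 -> 107
    $ stamp   = f$cvtime(after,,"MONTH") + f$cvtime(after,,"DAY") + -
                f$cvtime(after,,"HOUR") + f$cvtime(after,,"MINUTE")
    $ jobname = prefix + f$string(year) + stamp       ! BUP_10708150100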

    I also use submit's /PARAM= option on many jobs to test for special
    instructions. Also, some jobs I might want to run manually between
    scheduled batch runs. So, if the job finds itself running
    interactively, it checks P1 .eq. "NOW" (or whatever) and steps into
    some prompting, skipping the batch stuff.

    Hope that makes more sense to you. Sorry about the earlier over-
    editing.


  18. Re: How to detect duplicate auto-resubmitting batch job

    On Aug 15, 3:55 pm, Doug Phillips wrote:
    > [snip]
    > I've
    > found 3 or 4 characters are enough to at least hint at the job's
    > purpose in show/sys.

    Before I get pedantated, I of course meant SHOW QUE.


  19. Re: How to detect duplicate auto-resubmitting batch job

    On 8/15/07, AEF wrote:
    > But the job will be there. It's running! And that doesn't protect from
    > an unknown submitter submitting another instance of the job at a later
    > time.
    >


    There's always your job. If that's the only one, you resubmit.
    If there's another one, you don't resubmit. That logic is
    in the .shell.

    Carl

  20. Re: How to detect duplicate auto-resubmitting batch job

    Not a method of detecting multiple copies of a job, but if you set a
    /JOB_LIMIT=1 on the queue it's executing on, you can prevent more than
    one copy from running at the same time.
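
    For example (queue name assumed):

    $ SET QUEUE /JOB_LIMIT=1 SYS$BATCH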

