I've been asked to look in on an OSR 5.0.6 site where for the last few
days, /etc/cron dies and has to be restarted.
The site has several dozen entries in assorted crontab tables; someone
installed the following as /usr/lib/cron/queuedefs:
I've pored over the queuedefs man page, but I do not really grasp the
implications of those entries. If I turn on cron logging, I do see far
more log entries like this one:
! c queue max run limit reached Wed Mar 5 18:39:00 2008
! rescheduling a cron job Wed Mar 5 18:39:00 2008
than entries reflecting actual launching of tasks.
Jean-Pierre Radley wrote:
> I've been asked to look in on an OSR 5.0.6 site where for the last few
> days, /etc/cron dies and has to be restarted.
> The site has several dozen entries in assorted crontab tables; someone
> installed the following as /usr/lib/cron/queuedefs:
> I've pored over the queuedefs man page, but I do not really grasp the
> implications of those entries. If I turn on cron logging, I do see far
> more log entries like this one:
> ! c queue max run limit reached Wed Mar 5 18:39:00 2008
> ! rescheduling a cron job Wed Mar 5 18:39:00 2008
> than entries reflecting actual launching of tasks.
I've used UNIX since 1973 at AT&T (UNIX System III). I've been the
administrator on SCO Xenix systems since 1987, and I've been administering
and working with SCO UNIX since 1995. I've never used AT or BATCH jobs.
I only say this to point out that with UNIX you can learn something
new every day!
From the batch man page:
Places the specified job in a queue denoted by letter,
where letter is any lowercase letter from ``a'' to ``z''.
The queue letter is appended to the job identifier. The
following letters have special significance:
For more information on the use of different queues, see
the queuedefs(F) manual page.
I think that whoever set up the /usr/lib/cron/queuedefs file made a
mistake by adding the c.1j2n60w line. The 1j likely overrides cron's
default of 100 concurrent jobs, limiting the c (cron) queue to a single
running job at a time. On my 5.0.7, the default queuedefs only has the
following:
# less /usr/lib/cron/queuedefs
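For what it's worth, here's how I read the fields in a queuedefs(F)
entry. This little sketch (Python, purely for illustration; the
defaults of 100 jobs / nice 2 / 60-second wait are my understanding of
the OSR5 documentation, so check your own man page) decodes one line:

```python
import re

def parse_queuedefs(line):
    """Decode one queuedefs entry such as 'c.1j2n60w'.

    Fields, as I read queuedefs(F):
      <n>j - max jobs running in this queue at once
      <n>n - nice increment applied to the jobs
      <n>w - seconds cron waits before rescheduling a job
             it deferred because the job limit was reached
    Assumed defaults when a field is omitted: 100j, 2n, 60w.
    """
    m = re.match(r'^([a-z])\.(?:(\d+)j)?(?:(\d+)n)?(?:(\d+)w)?$',
                 line.strip())
    if not m:
        raise ValueError("not a queuedefs entry: %r" % line)
    queue, njob, nice, nwait = m.groups()
    return {
        "queue": queue,
        "njob": int(njob) if njob else 100,
        "nice": int(nice) if nice else 2,
        "nwait": int(nwait) if nwait else 60,
    }

print(parse_queuedefs("c.1j2n60w"))
```

Read that way, c.1j2n60w pins the cron queue to one job at a time,
which would explain the "max run limit reached" / "rescheduling a cron
job" pairs flooding the log.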
Then I got to thinking, "why the d, e, f, g, h, etc. entries?"
Googling queuedefs on c.u.s.m turns up:
> Newsgroups: comp.unix.sco.misc
> From: Mark <mark...@my-deja.com>
> Date: 2000/03/21
> Subject: Re: Batch Queues
> In article <38D7FE2A.2BF94...@aplawrence.com>,
> Tony Lawrence <t...@aplawrence.com> wrote:
>> Mark wrote:
>> > My company recently converted off of a Prime system to an IBM NetFinity
>> > box running SCO Openserver 5.0.5. The problem I'm now experiencing is
>> > with the batch queue. I currently only have the three standard queues,
>> > AT, BATCH & CRON. I would like to split my user community requests to
>> > multiple batch queues but I've been unsuccessful in finding out how I
>> > can do this. Can anyone offer some help in this endeavor??
>> Not sure I understand what you want- you do understand that
>> each user has their own separate queue for each of those?
>> And that the root user can put jobs out to be run by any
>> user? See http://aplawrence.com/Unixart/cron.html also.
>> Tony Lawrence (t...@aplawrence.com)
> What I'm trying to do is allow certain groups of users to send all of
> their batch requests to one queue while others go to another. We
> currently have all requests going to queue b. Unfortunately, if we get
> a couple of large reports running, the queue might be locked up for an
> extended period. Some other users then try to submit a small job that
> will kick out in 10 seconds or less and they get hung waiting for the
> queue to open up.
> I currently have my batch queue set to allow 2 jobs. I would prefer to
> have 4 queues each allowing for 1 job. The question is how do I get the
> submissions (this is transparent to the users) for a batch job to go to
> a queue other than the standard batch? All of my users' batch requests
> go immediately to this queue and not to a queue for each user.
> Thanks for your help!
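If I understand queuedefs(F) correctly, Mark's goal above (four queues
of one job each) would look something like the sketch below. The queue
letters w-z are just illustrative, and jobs would then be directed to a
given queue with something like `at -q w ...` (the -q option per at's
man page):

```
# sketch of /usr/lib/cron/queuedefs: four batch-style queues,
# each limited to 1 concurrent job, nice +2, 60-second
# rescheduling wait
w.1j2n60w
x.1j2n60w
y.1j2n60w
z.1j2n60w
```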
So at minimum, I'd drop the 'c' line in your queuedefs.
And, I may be all wet.
S.M. Fabac & Associates