ntpdate.c unsafe buffer write - NTP


Thread: ntpdate.c unsafe buffer write

  1. Re: ntpdate.c unsafe buffer write

    >>> In article <47ae150e$0$515$5a6aecb4@news.aaisp.net.uk>, David Woolley writes:

    David> Harlan Stenn wrote:
    >>>>> In article <47ad8971$0$514$5a6aecb4@news.aaisp.net.uk>, David Woolley
    >>>>> writes:



    David> I'm not convinced that SNTP will displace ntpdate for this purpose.
    >> Why not?


    David> Because ntpdate is fixed in the popular culture and, for the ordinary
    David> user, SNTP doesn't offer any obvious advantages.

    Well, The Plan is to remove ntpdate. So unless somebody writes a
    contributed script, the fact that ntpdate (with its known bugs) is going
    away and a documented set of functional equivalents will be available will
    probably be all the convincing that is needed.

    >> If you want to get the time set *now* and then start, regardless of how
    >> well the system can maintain that time, we can do that
    >> (sntp/ntpdate+ntpd).


    David> Not in Dave Mills future of ntpd, as you don't get ntpdate or SNTP.

    That would be true if Dave controlled the contents of the distribution.

    There is a set of required functionality out there that will be met by the
    distribution I control. There may be distributions I "roll" that have
    subset functionality, and Dave may choose to offer other distributions.

    I see no benefit and many problems in "forcing" this issue too soon, so at
    the moment it is a topic for discussion and the situation seems to be on
    track right now.

    This is, by no means, the most important thing we're all working on right
    now.

    Getting the sntp code up to spec is far more important, IMO.

    >> If you want to set the time ASAP and have stable system time before
    >> starting your apps, in the usual case you are talking about 11 seconds
    >> for this to happen (ntpd -g, with iburst, early in the boot sequence,
    >> using ntp-wait later in the boot sequence, just before starting
    >> time-critical services).


    David> I suspect that only sets the time to the nearest 128ms, unless it
    David> does something that ntpd doesn't normally do.

    I suspect you are mistaken, and what I describe is correct.

    In the case I describe, at the end of that O(11 second) period the clock is
    Real Close (ie, the offset is "low enough"), the frequency drift is known
    and compensated for, and ntpd is in "state 4".
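
    For concreteness, a sketch of that boot sequence (the service names and
    the exact placement of ntp-wait are illustrative, not prescriptive):

    ------------------------------------------------------------------------
    # early in the boot sequence
    ntpd -g                  # -g permits the one large initial correction

    # ... other, non-time-critical services start here ...

    # just before the time-critical services
    ntp-wait                 # blocks until ntpd reports it is synchronized
    time_critical_services
    ------------------------------------------------------------------------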
    --
    Harlan Stenn
    http://ntpforum.isc.org - be a member!

  2. Re: ntpdate.c unsafe buffer write

    Harlan Stenn wrote:

    > In the case I describe, at the end of that O(11 second) period the clock is
    > Real Close (ie, the offset is "low enough"), the frequency drift is known
    > and compensated for, and ntpd is in "state 4".


    As I read 4.2.4p4, for a warm start, the clock is within 128ms on exit.
    However, if it wasn't stepped, because it was already within 128ms, it
    will be slewing at maximum rate. Allowing 100ppm for motherboard
    tolerances, that means that it can take up to a further 320 seconds to
    reach the low milliseconds. I don't believe it would be safe to start
    ntpd in normal mode within that period.

    For a cold start, it won't reach state 4 for a further 900 seconds after
    first priming the clock filter.

    The 128ms is the tinkerable clock_max value, so it would be possible to
    configure for a tighter tolerance, but that probably means using
    different config files for start and run. The 900 is also the
    tinkerable clock_minstep. Note that setting clock_max to zero really
    sets it to infinity.

    Tinkering comes with the comment:

    Special tinker variables for Ulrich Windl. Very dangerous.
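
    Bearing that warning in mind, the corresponding ntp.conf lines would be
    something like the following sketch (the values are illustrative only):

    ------------------------------------------------------------------------
    tinker step 0.010        # clock_max: step threshold (default 0.128 s)
    tinker stepout 300       # clock_minstep: stepout interval (default 900 s)
    ------------------------------------------------------------------------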

    The best advice for someone trying to simulate ntpdate -b is probably to
    make sure that the clock is wrong by at least 128ms before starting.
    That way, you should get a step, even with the default configuration.

  3. Re: ntpdate.c unsafe buffer write

    >>> In article <47aed826$0$509$5a6aecb4@news.aaisp.net.uk>, David Woolley writes:

    David> Harlan Stenn wrote:
    >> In the case I describe, at the end of that O(11 second) period the clock
    >> is Real Close (ie, the offset is "low enough"), the frequency drift is
    >> known and compensated for, and ntpd is in "state 4".


    David> As I read 4.2.4p4, for a warm start, the clock is within 128ms on
    David> exit. However, if it wasn't stepped, because it was already within
    David> 128ms, it will be slewing at maximum rate. Allowing 100ppm for
    David> motherboard tolerances, that means that it can take up to a further
    David> 320 seconds to reach the low milliseconds. I don't believe it would
    David> be safe to start ntpd in normal mode within that period.

    Why would ntpd be exiting during a warm start?

    For the case I'm describing the startup script sequence is to fire up 'ntpd
    -g' early. If there are applications that need the system clock to be
    on-track stable (even if a wiggle is being dealt with), that's 'state 4',
    and running 'ntp-wait' before starting those services is, to the best of my
    knowledge, all that is required.

    I have not seen a situation that requires 'ntpd -q', and I am not talking
    about it here, either.

    David> For a cold start, it won't reach state 4 for a further 900 seconds
    David> after first priming the clock filter.

    If the system has a good drift file, I disagree with you.

    David> The 128ms is the tinkerable clock_max value, so it would be possible
    David> to configure for a tighter tolerance, but that probably means using
    David> different config files for start and run.

    And what is the big deal with using different config files? The config file
    mechanism has "include" capability, so it is trivial to maintain a common
    'base' configuration with customizations for separate start/run phases.
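
    For illustration, a minimal sketch of such a split (the file names and
    the server are hypothetical):

    ------------------------------------------------------------------------
    # /etc/ntp-common.conf: the shared 'base' configuration
    driftfile /var/lib/ntp/ntp.drift
    server ntp.example.com iburst

    # /etc/ntp-boot.conf: used only for the boot-time pass
    includefile /etc/ntp-common.conf
    tinker step 0.001
    ------------------------------------------------------------------------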

    But the bigger problem is why are you insisting on separate start/run
    phases? This has not been "best practice" for quite a while, and if you
    insist on using this method you will be running into the exact problems you
    are describing.

    David> The 900 is also the
    David> tinkerable clock_minstep. Note that setting clock_max to zero really
    David> sets it to infinity.

    David> Tinkering comes with the comment:

    David> Special tinker variables for Ulrich Windl. Very dangerous.

    David> The best advice for someone trying to simulate ntpdate -b is probably
    David> to make sure that the clock is wrong by at least 128ms before
    David> starting. That way, you should get a step, even with the default
    David> configuration.

    No, the best advice is to understand why you have been using ntpdate -b so
    far and understand the pros/cons of the new choices.

    You seem to be going for a "locally optimal" solution here and the 2nd order
    effects of this locally optimal solution are biting you.

    Get a bit more altitude and I believe you will find a "more optimal" solution.

    --
    Harlan Stenn
    http://ntpforum.isc.org - be a member!

  4. Re: ntpdate.c unsafe buffer write

    Harlan Stenn wrote:
    >
    > Why would ntpd be exiting during a warm start?


    Because we are discussing using it with the -q option. If you just use
    -g, it will take a lot longer to converge within a few milliseconds, as
    it will not slew at the maximum rate. If you use -q, you need to force
    a step if you want fast convergence.

    >
    > For the case I'm describing the startup script sequence is to fire up 'ntpd
    > -g' early. If there are applications that need the system clock to be
    > on-track stable (even if a wiggle is being dealt with), that's 'state 4',
    > and running 'ntp-wait' before starting those services is, to the best of my
    > knowledge, all that is required.


    State 4 means within 128ms and using the normal control loop, which has
    a time constant of around an hour.

    > David> For a cold start, it won't reach state 4 for a further 900 seconds
    > David> after first priming the clock filter.
    >
    > If the system has a good drift file, I disagree with you.


    The definition of cold start is that there is no drift file.

    > And what is the big deal with using different config files? The config file
    > mechanism has "include" capability so it is trivial to to easily maintain
    > common 'base' configuration with customizations for separate start/run
    > phases.


    You are now talking about using -q. The difficulty is that people have
    enough trouble getting the run phase config file right.

    >
    > But the bigger problem is why are you insisting on separate start/run
    > phases? This has not been "best practice" for quite a while, and if you
    > insist on using this method you will be running in to the exact problems you
    > are describing.


    > No, the best advice is to understand why you have been using ntpdate -b so
    > far and understand the pros/cons of the new choices.


    We are talking about system managers and package creators, neither of
    whom has much time to study the details.

  5. Re: ntpdate.c unsafe buffer write

    >>> In article <47af716b$0$513$5a6aecb4@news.aaisp.net.uk>, David Woolley writes:

    David> Harlan Stenn wrote:
    >> Why would ntpd be exiting during a warm start?


    David> Because we are discussing using it with the -q option. If you just
    David> use -g, it will take a lot longer to converge within a few
    David> milliseconds, as it will not slew at the maximum rate. If you use
    David> -q, you need to force a step if you want fast convergence.

    I still maintain you are barking up the wrong tree.

    In terms of the behavior model of ntp, "state 4" is as good as it gets. You
    are in the right ballpark.

    If you want something else, something you consider "better" than state 4,
    please make a case for this and lobby for it.

    >> For the case I'm describing the startup script sequence is to fire up
    >> 'ntpd -g' early. If there are applications that need the system clock to
    >> be on-track stable (even if a wiggle is being dealt with), that's 'state
    >> 4', and running 'ntp-wait' before starting those services is, to the best
    >> of my knowledge, all that is required.


    David> State 4 means within 128ms and using the normal control loop, which
    David> has a time constant of around an hour.

    OK, and so what?

    Is State 4 insufficient for your needs, or are you just splitting hairs?

    David> For a cold start, it won't reach state 4 for a further 900 seconds
    David> after first priming the clock filter.

    >> If the system has a good drift file, I disagree with you.


    David> The definition of cold start is that there is no drift file.

    OK, now I know what the definitions are.

    I don't recall offhand the expected time to hit state 4 without a drift
    file.

    1) This should not be the ordinary case
    2) How does this have any bearing on the ntpdate -b discussion?

    >> And what is the big deal with using different config files? The config
    >> file mechanism has "include" capability, so it is trivial to maintain
    >> a common 'base' configuration with customizations for separate
    >> start/run phases.


    David> You are now talking about using -q. The difficulty is that people
    David> have enough trouble getting the run phase config file right.

    I mention it because it's what you seem to be insisting on talking about.

    I was providing a way to address the problems you describe with the (IMO
    bad) mechanism (-q) under discussion.

    >> But the bigger problem is why are you insisting on separate start/run
    >> phases? This has not been "best practice" for quite a while, and if you
    >> insist on using this method you will be running into the exact problems
    >> you are describing.


    >> No, the best advice is to understand why you have been using ntpdate -b
    >> so far and understand the pros/cons of the new choices.


    David> We are talking about system managers and package creators, neither of
    David> whom has much time to study the details.

    Blessed are those who get what they deserve.

    These are the same folks who must get ssh configurations and various other
    network configurations working.

    If the stock things work well enough for folks, great.

    If folks have suggestions for improvements I welcome them.

    If folks want something different I invite them to make a case for it.
    Please remember the scope and complexity of the problem case. It's much
    easier to have a simpler solution if one is prepared to ignore certain
    problems. A case in point here is Maildir.

    If somebody is in the situation where they know they have specific
    requirements for time, they are in the situation where they have enough
    altitude on their requirements to know the costs/benefits of what is
    involved in getting there.

    --
    Harlan Stenn
    http://ntpforum.isc.org - be a member!

  6. Re: ntpdate.c unsafe buffer write

    Hello David,

    On Sunday, February 10, 2008 at 10:55:29 +0000, David Woolley wrote:

    > However, if it wasn't stepped, because it was already within 128ms, it
    > will be slewing at maximum rate. Allowing 100ppm for motherboard
    > tolerances, that means that it can take up to a further 320 seconds to
    > reach the low milliseconds.


    Only 256 seconds maximum, because the kind of slew (singleshot)
    initiated by ntpd -q comes *above* the usual frequency correction
    already annihilating the motherboard error.
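
    For reference, the arithmetic behind the two figures, assuming the
    classic 500 PPM maximum adjtime() slew rate:

    | 0.128 s / (500 - 100) ppm = 320 s (slew fighting a worst-case board error)
    | 0.128 s / 500 ppm = 256 s (singleshot slew above the frequency correction)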


    > I don't believe it would be safe to start ntpd in normal mode within
    > that period.


    Indeed: the daemon then behaves strangely, not sane at all. Last year
    I published here an awk script calling ntpd -gq and then sleeping
    until any pending slew is finished. After that, normal-mode ntpd can be
    started safely. And of course the daemon really appreciates starting up
    with a near-zero initial phase offset.


    Serge.
    --
    Serge point Bets arobase laposte point net

  7. Re: ntpdate.c unsafe buffer write

    >>> In article , Serge Bets writes:

    David> I don't believe it would be safe to start ntpd in normal mode within
    David> that period.

    Serge> Indeed: the daemon then behaves strangely, not sane at all. Last year
    Serge> I published here an awk script calling ntpd -gq and then sleeping
    Serge> until an eventual slew is finished. After that, normal mode ntpd can
    Serge> be started safely. And of course the daemon really appreciates to
    Serge> startup with a near-zero initial phase offset.

    1) what are you trying to accomplish by the sequence:

    ntpd -gq ; wait a bit; ntpd

    that you do not get with:

    ntpd -g ; ntp-wait

    2) there have been recent changes to the initial frequency/offset situation
    with ntp-dev. Have you tried the latest code to see how it behaves?

    --
    Harlan Stenn
    http://ntpforum.isc.org - be a member!

  8. Re: ntpdate.c unsafe buffer write

    Serge Bets wrote:

    >
    > Only 256 seconds maximum, because the kind of slew (singleshot)
    > initiated by ntpd -q comes *above* the usual frequency correction
    > already annihilating the motherboard error.


    That assumes the use of the kernel time discipline, although if you
    don't have that, it is even more important to use ntpdate -b if you
    want fast phase convergence, as the time won't drift much between the
    initial set and the start of ntpd.

  9. Re: ntpdate.c unsafe buffer write

    Harlan Stenn writes:

    >>>> In article <47af716b$0$513$5a6aecb4@news.aaisp.net.uk>, David Woolley writes:


    >David> Harlan Stenn wrote:
    >>> Why would ntpd be exiting during a warm start?


    >David> Because we are discussing using it with the -q option. If you just
    >David> use -g, it will take a lot longer to converge within a few
    >David> milliseconds, as it will not slew at the maximum rate. If you use
    >David> -q, you need to force a step if you want fast convergence.


    >I still maintain you are barking up the wrong tree.


    >In terms of the behavior model of ntp, "state 4" is as good as it gets. You
    >are in the right ballpark.


    And as has been commented on numerous times, ntp in state 4 is very slow to
    converge to the best possible time control. This was a deliberate design
    decision, as I understand it, so that in steady state the time is averaged
    over a large number of samples (not helped by the fact that 85% of samples
    are thrown away), to reduce the statistical error in the clock control.
    Note that at poll 7 the number of actual samples averaged over in the time
    scale of the ntp feedback loop is only about 3, so the statistical
    averaging, even with such a long time constant, is not very good.


    >If you want something else, something you consider "better" than state 4,
    >please make a case for this and lobby for it.


    I think many people have lobbied for faster response. In the discussion of
    the chrony/ntp comparison, chrony is much faster to correct errors, and at
    least on a local network, better at disciplining the clock as well (in
    part, I think, because on such a minimal round-trip network the frequency
    fluctuations dominate over the offset measurement errors -- i.e., the Allan
    intercept is much lower than the assumed 1500 sec in that kind of
    situation -- also the drift model on real systems is not well modeled by
    1/f noise). So I think the point is that, using ntpdate, one can rapidly
    bring the clock to within a few msec of the correct time, rather than
    waiting for the feedback loop to finally eliminate that last 128 msec of
    offset.

    >snip

    >If somebody is in the situation where they know they have specific
    >requirements for time, they are in the situation where they have enough
    >altitude on their requirements to know the costs/benefits of what is
    >involved in getting there.


    Well, I disagree. The sign of a good piece of software is that it does what
    it needs to do despite the user having a bad idea of how to accomplish the
    task. The use of software should not be an esoteric exercise. Let me again
    bring up chrony. It manages to get the system to within a few msec of the
    right time on a timescale of minutes, not hours. It has a very different
    model for the clock control mechanism from ntp. From what I have seen now,
    both in a local-net system (with 0.2 ms round-trip times) and an ADSL
    connection (20 ms round-trip times), chrony also does as good a job or
    better than ntp at disciplining the clock. I have just ordered another
    Garmin 18LVC so I can make measurements of how well chrony and ntp actually
    discipline the ADSL system's time to true time, despite all of the noise
    that ADSL adds to the measurement process. (Both ntp and chrony seem to
    have about the same standard deviation in the measured offset, so that
    gives no information as to how well the clock is actually disciplined --
    one could discipline it to 5 usec and the other to 100 usec and you could
    not tell the difference from the measured times, which have a variance of
    500 usec due to round-trip problems.)







  10. Re: ntpdate.c unsafe buffer write

    On Monday, February 11, 2008 at 7:38:53 +0000, David Woolley wrote:

    > Serge Bets wrote:
    >> the kind of slew (singleshot) initiated by ntpd -q comes *above* the
    >> usual frequency correction

    > That assumes the use of the kernel time discipline


    Indeed: I sometimes forget this can be lacking or disabled, sorry.


    Serge.
    --
    Serge point Bets arobase laposte point net

  11. Re: ntpdate.c unsafe buffer write

    Hello Harlan,

    On Monday, February 11, 2008 at 0:33:36 +0000, Harlan Stenn wrote:

    > 1) what are you trying to accomplish by the sequence:
    >
    > ntpd -gq ; wait a bit; ntpd
    >
    > that you do not get with:
    >
    > ntpd -g ; ntp-wait


    Let's compare. I used a few-weeks-old ntp-dev 4.2.5p95, because the
    latest p113 seems to behave strangely (clearing STA_UNSYNC long before
    the clock is really synced). The driftfile exists and has a correct
    value. ntp.conf declares one reachable LAN server with iburst. There are
    4 main cases: initial phase offset bigger than 128 ms, or below, and
    your startup method, or my method.

    -1) Initial phase offset over 128 ms, ntp-wait method:

    | 0:00 # ntpd -g; ntp-wait; time_critical_apps
    | 0:07 time step ==> the clock is very near 0 offset (less than a ms),
    | stratum 16, refid .STEP., state 4
    | 0:12 ntp-wait terminates ==> time critical apps can be started
    | 1:20 *synchronized, stratum x ==> ntpd starts serving good time

    Timings are in minutes:seconds, relative to startup. Note this last
    *sync stage, when ntpd takes a non-16 stratum, comes at a seemingly
    random moment, sometimes as early as 0:40.


    -2) Initial phase offset over 128 ms, my slew_sleeping script:

    | 0:00 # ntpd -gq | slew_sleeping; ntpd
    | 0:07 time step, no sleep ==> near 0 offset (time critical apps can be
    | started)
    | 0:14 *synchronized ==> ntpd starts serving good time


    -3) Initial phase offset below 128 ms, ntp-wait method (worst case):

    | 0:00 # ntpd -g; ntp-wait; time_critical_apps
    | 0:07 *synchronized ==> ntpd starts serving time, a still "bad" time,
    | because the 128 ms offset is not yet slewed
    | 0:12 ntp-wait terminates ==> time critical apps are started
    | 7:30 offset crosses the zero line for the first time, and begins an
    | excursion on the other side (up to maybe 40 ms). The initial good
    | frequency has been modified to slew the phase offset, and is now
    | wildly bad (by perhaps 50 or 70 ppm). The chaos begins, and will
    | stabilize some hours later.


    -4) Initial phase offset below 128 ms, slew_sleeping script:

    | 0:00 # ntpd -gq | slew_sleeping; ntpd
    | 0:07 begin max rate slew, sleeping all the necessary time (max 256
    | seconds)
    | 4:23 wake-up ==> near 0 offset, time critical apps can be started
    | 4:30 *synchronized ==> ntpd starts serving good time


    Summary: The ntp-wait method is good at protecting apps against steps,
    but not against "large" offsets (tens or a hundred ms). The daemon
    itself can start serving such less-than-good time. Startup takes more
    time to reach a near-0 offset, and can wreck the frequency.

    The ntpd -gq method also avoids steps to applications, if all works
    well. But it's not a 100% protection; that is not the goal. It also
    protects apps against large offsets, never serves bad time, and never
    squashes the driftfile. It makes a much saner daemon startup, more
    stable, where the "chaos" situation described above (case #3) doesn't
    happen. It starts up faster, except in the cases where ntp-wait cheats
    by tolerating not-yet-good offsets.


    If necessary, slew_sleeping and ntp-wait can be combined, for a better
    level of protection. What about the following, which should survive even
    a server temporarily unavailable during startup, further delaying time
    critical apps:

    | # ntpd -gq | slew_sleeping; ntpd -g; ntp-wait; time_critical_apps

    One could also imagine looping ntpd -gq until it works, then sleep, then
    ntpd and time_critical_apps (the slew_sleeping script has to be
    modified to return a success code):

    | # while ntpd -gq | slew_sleeping; do :; done; ntpd; time_critical_apps


    Serge.
    --
    Serge point Bets arobase laposte point net

  12. Re: ntpdate.c unsafe buffer write

    I've tried to keep quiet and bite my tongue at this whole ntp vs chrony
    thing... But something has been nagging me in the back of my head that I
    just have to know the answer to...

    How are you measuring your results? From what I've skimmed over, you are
    simply using each program's own generated statistics... Wouldn't a more
    correct way be to use an external (and calibrated) device to measure /
    compare, to ensure the results are actually valid? Otherwise you are in
    essence comparing apples to oranges...

  13. Re: ntpdate.c unsafe buffer write

    jason@extremeoverclocking.com (Jason Rabel) writes:

    >I've tried to keep quiet and bite my tongue at this whole ntp vs chrony
    >thing... But something has been nagging me in the back of my head that I
    >just have to know the answer to...


    >How are you measuring your results? From what I've skimmed over, you are
    >simply using each program's own generated statistics... Wouldn't a more
    >correct way be to use an external (and calibrated) device to measure /
    >compare, to ensure the results are actually valid? Otherwise you are in
    >essence comparing apples to oranges...


    Well, no. Yes, I do use each program's statistics, but they are the raw
    statistics of the offset (t2+t3-t1-t4)/2 and the round trip (t2-t1+t4-t3)
    from the ntp packets. There is no processing that has gone on in either
    system in reporting those numbers. In fact, in ntp I hacked the
    report_peerstats function to report the raw numbers, not the ones that have
    gone through the clock filter.
    If the statistics of those are the same for the two, one cannot say
    anything. If one is worse than the other, however (which is what I find for
    ntp on the internal network), then the one that is worse is a worse clock.

    Yes, ideally one would also have an independent external time source which
    one uses to see how well either system disciplines the clock to true UTC,
    and I am getting one to look at the statistics for another system I have,
    which is connected to my server (a GPS PPS driven computer running ntp and
    getting a variance of about 2-3 usec) via an ADSL line from home (20 msec
    round trip typical, and about 0.5 ms variance for both programs for the
    "raw" offset distribution -- well, not quite raw, but with round trips
    greater than twice the minimum removed from the statistics). The two
    programs are very similar, so I cannot say which disciplines the clock
    better. E.g., ntp might discipline it to within 5 usec of UTC while chrony
    only does it to within 100 usec, and I could not tell by looking at the
    variance, since I do not believe that the variance is accurate to 10%.

    So, no, I am comparing apples to apples ( the offsets as determined from
    the ntp packet exchange mechanism which both use and both report).
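
    For readers following along, a minimal sketch of those two formulas over
    one packet exchange, in awk (the four timestamps are made up):

    ------------------------------------------------------------------------
    # t1: client transmit, t2: server receive,
    # t3: server transmit, t4: client receive
    echo "10.000 10.012 10.013 10.021" | awk '{
        t1 = $1; t2 = $2; t3 = $3; t4 = $4
        offset = (t2 + t3 - t1 - t4) / 2    # clock offset, here 0.002 s
        delay  = (t2 - t1) + (t4 - t3)      # round trip,   here 0.020 s
        printf "offset %.3f s, delay %.3f s\n", offset, delay
    }'
    ------------------------------------------------------------------------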


  14. Re: ntpdate.c unsafe buffer write

    Guys,

    There seems to be some misinformation here.

    Both ntpdate and ntpd -q set the offset with adjtime() and then exit.
    After that, stock Unix adjtime() slews the clock at rate 500 PPM, which
    indeed could take 256 s for an initial offset of 128 ms. A prudent
    response would be to measure the initial offset and compute the time to
    wait. The ntp-wait script waits for ntpd to enter state 4, which could
    happen with an initial offset as high as 128 ms.

    The ntpd time constant is purposely set somewhat large at 2000 s, which
    results in a risetime of about 3000 s. This is a compromise between stable
    acquisition for herky-jerky Internet paths and speed of convergence for
    LANs. For typical Internet paths the Allan intercept is about 2000 s.
    For fast LANs with nanosecond clock resolution, the Allan intercept can
    be as low as 250 s, which is what the kernel PPS loop is designed for.

    Both the daemon and kernel loops are engineered so that the time
    constant is directly proportional to the poll interval and the risetime
    scales directly. If the poll exponent is set to the minimum 4 (16 s) the
    risetime is 500 s. While not admitted in public, the latest snapshot
    can set the poll interval to 3 (8 s), so the risetime is 250 s. This
    works just fine on a LAN, but I would never do this on an outside circuit.
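
    For a LAN-only association, that would look something like this sketch
    (the server address is illustrative):

    | server 192.168.1.1 iburst minpoll 4 maxpoll 4   # pin the poll interval to 16 s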

    Dave

    Unruh wrote:
    >snip

  15. Re: ntpdate.c unsafe buffer write

    "ntdate -b" steps the clock. That's the function under discussion.
    The one that's used nearly universally in boot sequences.

    -Tom

    David L. Mills wrote:
    > Both ntpdate and ntpd -q set the offset with adjtime() and then exit.
    >snip

  16. Re: ntpdate.c unsafe buffer write

    Hello David,

    On Monday, February 11, 2008 at 19:03:36 +0000, David L. Mills wrote:

    > Both ntpdate and ntpd -q set the offset with adjtime() and then exit.
    > After that, stock Unix adjtime() slews the clock at rate 500 PPM,
    > which indeed could take 256 s for an initial offset of 128 ms.


    And on some systems, adjtime() calls adjtimex(ADJ_OFFSET_SINGLESHOT) to
    do the job.

    Note that ntpdate does not stop slewing when it reaches zero offset,
    but deliberately overshoots by 50%. That's why ntpdate -b (forced step)
    or ntpd -q (exact slew until zero) are so much better.


    > A prudent response would be to measure the initial offset and compute
    > the time to wait.


    Thanks! That's exactly what the slew_sleeping script does:

    ------------------------------------------------------------------------
    #!/bin/sh

    # Wait for any singleshot slew started by "ntpd -gq" to complete.
    # Reads ntpd's output on stdin; exits non-zero once the time was
    # actually set or slewed (see the while loop below).
    slew_sleeping() {
        awk '
            { print }
            /^ntpd: time slew .*s$/ {
                # "ntpd: time slew -0.003000s": at the 500 PPM slew rate,
                # each second of offset takes 2000 s to amortize.
                sleep = $4 * 2000
                if (sleep < 0)
                    sleep = -sleep
                sleep = int(sleep + 0.999999)   # rounded up
                success = 1
            }
            /^ntpd: time set .*s$/ {
                # the clock was stepped: nothing to wait for
                success = 1
            }
            END {
                if (sleep) {
                    printf "wait for the end of time slew, sleeping %d seconds\n", sleep
                    system("sleep " sleep)
                }
                exit success
            }
        '
    }

    # echo "ntpd: time slew -0.003000s" | slew_sleeping; exit

    # Retry until ntpd -gq actually sets the time, then start the daemon.
    while ntpd -gq | slew_sleeping; do :; done; ntpd
    ------------------------------------------------------------------------


    Serge.
    --
    Serge point Bets arobase laposte point net

  17. Re: ntpdate.c unsafe buffer write

    Guys,

    Just for clarity: neither the daemon frequency nor the kernel frequency
    is adjusted in any way with ntpd -q.

    Serge Bets wrote:

    > On Monday, February 11, 2008 at 7:38:53 +0000, David Woolley wrote:
    >
    >> Serge Bets wrote:
    >>
    >>> the kind of slew (singleshot) initiated by ntpd -q comes *above* the
    >>> usual frequency correction
    >>
    >> That assumes the use of the kernel time discipline
    >
    > Indeed: I sometimes forget this can be lacking or disabled, sorry.
    >
    > Serge.


  18. Re: ntpdate.c unsafe buffer write

    Serge,

    The behavior after a step is deliberate. The iburst volley after a step
    is delayed by a random fraction of the poll interval to avoid implosion
    at a busy server. An additional delay may be enforced to avoid violating
    the headway restrictions. This is not to protect your applications; it
    is to protect the server.

    Dave

    Serge Bets wrote:

    >snip

  19. Re: ntpdate.c unsafe buffer write

    Serge,

    Interesting script - thanks. Would you like me to put it in the
    distribution?

    This brings up an underlying question. It is possible for events to unfold
    in such a way that, while we are in state 4, there will be future wiggles.
    Some of them may even take us out of state 4.

    Agreed?

    If so, what benefit do we get by using the script to delay things while we
    wait for a slew to finish in state 4?

    What difference does it make if the system in question is a client as
    opposed to a server?

    --
    Harlan Stenn
    http://ntpforum.isc.org - be a member!

  20. Re: ntpdate.c unsafe buffer write

    Tom,

    With "tinker step .001" in the configuration file, ntpd -q will step the
    clock, unless the residual offset is less than .001 s. This is probably
    more complexity than you can stand. Just keep using ntpdate and be happy.
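
    A sketch of what that looks like in practice, assuming a separate
    boot-time configuration file (the file name is made up):

    ------------------------------------------------------------------------
    # /etc/ntp-step.conf holds the usual server lines plus:
    #     tinker step 0.001
    ntpd -q -c /etc/ntp-step.conf   # step unless within 1 ms, then exit
    ------------------------------------------------------------------------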

    Dave

    Tom Smith wrote:

    > "ntdate -b" steps the clock. That's the function under discussion.
    > The one that's used nearly universally in boot sequences.
    >
    > -Tom
    >
    > David L. Mills wrote:
    >snip

