Optimal config - NTP


Thread: Optimal config

  1. Optimal config

    Hello, time nerds. :-)

    Here's what I want: accurate time to at least a few ms of UTC. Don't
    think I have users that need better than that. I'd like the time to be
    continuous and not jump around, of course.


    Here's what I have: 3 GPS clocks (two old Datum TymServe 2100s and one
    unknown). Two are fairly close geographically and the third is about 2
    miles away.


    Currently, I have 4 systems (RHEL 3 on Dell HW) that are peered to each
    other and use all three GPS clocks as references. OK, looks like I have
    two others peered as well - similar to the others but RHEL 5.

    I also have another group of 4 servers peered together (RHEL 5 on Dell
    except one that is running an NPAD server on Knoppix) in a similar setup
    with the same three GPS clocks as well as two external Stratum 2 servers.

    iburst is on for all (saw that thread and will disable for externals!)

    On my own workstation, using 8 of the ten servers mentioned above :-) I see
    messages like this:


    May 5 09:44:05 toto ntpd[5589]: synchronized to xxx.xxx.xxx.xxx, stratum 2

    where my client daemon seems to flip around from one of my servers to the
    others.

    I have real servers that use the first 4 I described above as their
    "server" entries and I see similar "synchronized" messages.


    What can or should I do to make this more robust? Would a "prefer"
    statement help? Should I use more or fewer peers?
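
    [Ed. note: the setup described above would correspond to a client
    ntp.conf roughly along these lines; all hostnames here are
    hypothetical placeholders, not the poster's actual servers.]

    ```
    # Three GPS-disciplined stratum-1 references (hostnames hypothetical)
    server tymserve1.example.edu iburst   # Datum TymServe 2100 #1
    server tymserve2.example.edu iburst   # Datum TymServe 2100 #2
    server gps3.example.edu iburst        # third GPS clock, ~2 miles away

    # Symmetric peering with the other servers in the group
    peer ntp2.example.edu
    peer ntp3.example.edu
    peer ntp4.example.edu
    ```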



    --
    Peter Laws / N5UWY
    National Weather Center / Network Operations Center
    University of Oklahoma Information Technology
    plaws@ou.edu
    -----------------------------------------------------------------------
    Feedback? Contact my director, Craig Cochell, craigc@ou.edu. Thank you!

    _______________________________________________
    questions mailing list
    questions@lists.ntp.org
    https://lists.ntp.org/mailman/listinfo/questions


  2. Re: Optimal config

    Peter Laws wrote:
    > Hello, time nerds. :-)
    >
    > Here's what I want: accurate time to at least a few ms of UTC. Don't
    > think I have users that need better than that. I'd like the time to be
    > continuous and not jump around, of course.
    >
    >
    > Here's what I have: 3 GPS clocks (two old Datum TymServe 2100s and one
    > unknown). Two are fairly close geographically and the third is about 2
    > miles away.
    >
    >
    > Currently, I have 4 systems (RHEL 3 on Dell HW) that are peered to each
    > other and use all three GPS clocks as references. OK, looks like I have
    > two others peered as well - similar to the others but RHEL 5.
    >
    > I also have another group of 4 servers peered together (RHEL 5 on Dell
    > except one that is running an NPAD server on Knoppix) in a similar setup
    > with the same three GPS clocks as well as two external Stratum 2 servers.
    >
    > iburst is on for all (saw that thread and will disable for externals!)


    This is probably a "red herring". There is normally no reason NOT to
    use iburst. Burst is another story! Several HUNDRED systems starting
    up simultaneously and all using iburst could cause problems but this is
    something you would have to work very hard to achieve! ;-)

    Iburst simply causes ntpd to send eight requests to a server at
    intervals of two seconds when it initializes. The eight replies that
    will normally result allow ntpd to fill its filter pipeline and make a
    pretty good guess at what time it is. Subsequent requests are sent at
    the normal poll intervals, ranging from 64 to 1024 seconds.

    Burst, OTOH, is a special purpose hack intended for systems getting time
    over a dial-up telephone line and where calls might be initiated two to
    four times per day and last for only a minute or two.
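
    [Ed. note: in ntp.conf terms the distinction is just a keyword on the
    server line; hostnames below are hypothetical.]

    ```
    # iburst: speeds up initial synchronization only; safe for normal use
    server ntp1.example.edu iburst

    # burst: sends a burst at *every* poll; meant for intermittent links
    # such as dial-up, and abusive to third-party servers otherwise
    server dialup-time.example.net burst
    ```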

    >
    > On my own workstation, using 8 of the ten server mentioned above :-) I
    > see messages like this:
    >
    >
    > May 5 09:44:05 toto ntpd[5589]: synchronized to xxx.xxx.xxx.xxx, stratum 2
    >
    > where my client daemon seems to flip around from one of my servers to
    > the others.
    >
    > I have real servers that use the first 4 I described above as their
    > "server" entries and I see similar "synchronized" messages.
    >
    >
    > What can or should I do to make this more robust? Would a "prefer"
    > statement help? Should I use more or fewer peers?


    Prefer should put a stop to the "clock hopping". OTOH, as long as the
    clocks in question have the *same* time, the clock hopping should not
    cause any problems.
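
    [Ed. note: a sketch of how "prefer" would look in the client's
    ntp.conf; hostnames are hypothetical.]

    ```
    # "prefer" keeps the marked server as the system peer as long as it
    # passes the sanity checks, which stops the hopping seen in syslog
    server tymserve1.example.edu iburst prefer
    server tymserve2.example.edu iburst
    server gps3.example.edu iburst
    ```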

  3. Re: Optimal config

    Richard B. Gilbert wrote:
    > Iburst simply causes ntpd to send eight requests to a server at
    > intervals of two seconds when it initializes. The eight replies that
    > will normally result allow ntpd to fill its filter pipeline and make a
    > pretty good guess at what time it is. Subsequent requests are sent at
    > the normal poll intervals, ranging from 64 to 1024 seconds.


    Unless the specified peer becomes unreachable, in which case the
    eight-packet burst resumes until the peer is reachable again. For
    ntp-4.2.4p4, anyway.

    Dennis

    --
    Dennis Hilberg, Jr. \ timekeeper(at)dennishilberg(dot)com
    NTP Server Information: \ http://saturn.dennishilberg.com/ntp.php

  4. Re: Optimal config

    Dennis Hilberg, Jr. wrote:
    > Richard B. Gilbert wrote:
    >> Iburst simply causes ntpd to send eight requests to a server at
    >> intervals of two seconds when it initializes. The eight replies that
    >> will normally result allow ntpd to fill its filter pipeline and make a
    >> pretty good guess at what time it is. Subsequent requests are sent at
    >> the normal poll intervals, ranging from 64 to 1024 seconds.

    >
    > Unless the specified peer becomes unreachable, in which case the
    > eight-packet burst will resume until the peer is reachable. For
    > ntp-4.2.4p4 anyway.
    >
    > Dennis
    >


    It seems as if the code might be improved a little. There's no point in
    whipping a dead horse! If no reply is received in response to any of
    the eight initial request packets, it would seem reasonable to do the
    "exponential back off"; wait one minute and try again, wait two minutes
    and try again, wait four minutes and try again. . . .

  5. Re: Optimal config

    Richard,

    It's a little more complicated than that. If the server is unreachable
    when a poll is scheduled, a single poll is sent. If no reply is heard,
    the client tries again in 64 s, repeats for a total of three times,
    and then returns to the original poll interval. If still not heard, it
    backs off. This behavior is designed to protect busy servers while
    minimizing the time to resume after a lengthy outage.

    If a reply is heard for the first poll, the client completes the iburst
    sequence for a total of six polls. This is almost always enough for
    initial synchronization. The same behavior is used for burst mode,
    which is recommended when the poll interval is much greater than 1024 s.
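
    [Ed. note: the retry schedule described above can be modeled as a toy
    sketch; this is an illustration of the described behavior, not ntpd's
    actual code, and the 16x backoff cap is an assumption.]

    ```python
    def unreachable_poll_delays(poll_interval=1024):
        """Yield successive delays (seconds) between polls to an
        unreachable server, per the schedule described above: three
        quick retries at 64 s, then the original poll interval, then
        exponential backoff (capped here at 16x, an assumed limit)."""
        for _ in range(3):          # three retries, 64 s apart
            yield 64
        delay = poll_interval       # return to the original interval
        while True:
            yield delay
            delay = min(delay * 2, 16 * poll_interval)  # back off
    ```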

    Dave

    Richard B. Gilbert wrote:
    > Dennis Hilberg, Jr. wrote:
    >
    >> Richard B. Gilbert wrote:
    >>
    >>> Iburst simply causes ntpd to send eight requests to a server at
    >>> intervals of two seconds when it initializes. The eight replies that
    >>> will normally result allow ntpd to fill its filter pipeline and make
    >>> a pretty good guess at what time it is. Subsequent requests are sent
    >>> at the normal poll intervals, ranging from 64 to 1024 seconds.

    >>
    >>
    >> Unless the specified peer becomes unreachable, in which case the
    >> eight-packet burst will resume until the peer is reachable. For
    >> ntp-4.2.4p4 anyway.
    >>
    >> Dennis
    >>

    >
    > It seems as if the code might be improved a little. There's no point in
    > whipping a dead horse! If no reply is received in response to any of
    > the eight initial request packets, it would seem reasonable to do the
    > "exponential back off"; wait one minute and try again, wait two minutes
    > and try again, wait four minutes and try again. . . .


  6. Re: Optimal config

    OK, so aside from not worrying about iburst .... ?


    Peter Laws wrote:
    > Hello, time nerds. :-)
    >
    > Here's what I want: accurate time to at least a few ms of UTC. Don't
    > think I have users that need better than that. I'd like the time to be
    > continuous and not jump around, of course.
    >
    >
    > Here's what I have: 3 GPS clocks (two old Datum TymServe 2100s and one
    > unknown). Two are fairly close geographically and the third is about 2
    > miles away.
    >
    >
    > Currently, I have 4 systems (RHEL 3 on Dell HW) that are peered to each
    > other and use all three GPS clocks as references. OK, looks like I have
    > two others peered as well - similar to the others but RHEL 5.
    >
    > I also have another group of 4 servers peered together (RHEL 5 on Dell
    > except one that is running an NPAD server on Knoppix) in a similar setup
    > with the same three GPS clocks as well as two external Stratum 2 servers.
    >
    > iburst is on for all (saw that thread and will disable for externals!)
    >
    > On my own workstation, using 8 of the ten servers mentioned above :-) I see
    > messages like this:
    >
    >
    > May 5 09:44:05 toto ntpd[5589]: synchronized to xxx.xxx.xxx.xxx, stratum 2
    >
    > where my client daemon seems to flip around from one of my servers to the
    > others.
    >
    > I have real servers that use the first 4 I described above as their
    > "server" entries and I see similar "synchronized" messages.
    >
    >
    > What can or should I do to make this more robust? Would a "prefer"
    > statement help? Should I use more or fewer peers?



    --
    Peter Laws / N5UWY
    National Weather Center / Network Operations Center
    University of Oklahoma Information Technology
    plaws@ou.edu
    -----------------------------------------------------------------------
    Feedback? Contact my director, Craig Cochell, craigc@ou.edu. Thank you!

