broadcast client - NTP


Thread: broadcast client

  1. broadcast client

    Hello,
    I have a question regarding broadcast mode. I have 6 machines
    synchronizing with the same ntp server. That server uses a local
    ntp disciplined clock (we are looking into a GPS one). The machines are
    connected via 1 Gbps switch. The network is lightly loaded and I
    configured the clients as such

    server ntp minpoll 4 maxpoll 4 iburst

    However, I notice that two clients have a relatively large offset
    from the ntp server (greater than 100 micro-seconds according to "ntpq
    -p"). I considered setting the server into broadcast mode and
    enabling broadcastclient on the clients to avoid 6 machines polling
    the same server every 16 seconds. I have two questions regarding
    this. What is the syntax for setting the frequency of broadcasts on
    the server? Also, how can I check the approximate time offset between
    the clients and the server? Should I then peer each client with the
    ntp server and rely on broadcast messages for time synchronization?

    Thanks


  2. Re: broadcast client

    rochertov@gmail.com wrote:
    > Hello,
    > I have a question regarding broadcast mode. I have 6 machines
    > synchronizing with the same ntp server. That server uses a local
    > ntp disciplined clock (we are looking into a GPS one). The machines are
    > connected via 1 Gbps switch. The network is lightly loaded and I
    > configured the clients as such
    >
    > server ntp minpoll 4 maxpoll 4 iburst
    >
    > However, I notice that two clients have a relatively large offset
    > from the ntp server (greater than 100 micro-seconds according to "ntpq
    > -p"). I considered setting the server into broadcast mode and
    > enabling broadcastclient on the clients to avoid 6 machines polling
    > the same server every 16 seconds. I have two questions regarding
    > this. What is the syntax for setting the frequency of broadcasts on
    > the server? Also, how can I check the approximate time offset between
    > the clients and the server? Should I then peer each client with the
    > ntp server and rely on broadcast messages for time synchronization?
    >


    Remove the "minpoll 4 maxpoll 4" and let ntpd work as designed. When
    you force the polling interval to 16 seconds that way, you deprive ntpd
    of the ability to accurately measure and correct small errors.

  3. Re: broadcast client

    "Richard B. Gilbert" writes:

    >rochertov@gmail.com wrote:
    >> Hello,
    >> I have a question regarding broadcast mode. I have 6 machines
    >> synchronizing with the same ntp server. That server uses a local
    >> ntp disciplined clock (we are looking into a GPS one). The machines are
    >> connected via 1 Gbps switch. The network is lightly loaded and I
    >> configured the clients as such
    >>
    >> server ntp minpoll 4 maxpoll 4 iburst
    >>
    >> However, I notice that two clients have a relatively large offset
    >> from the ntp server (greater than 100 micro-seconds according to "ntpq
    >> -p"). I considered setting the server into broadcast mode and
    >> enabling broadcastclient on the clients to avoid 6 machines polling
    >> the same server every 16 seconds. I have two questions regarding
    >> this. What is the syntax for setting the frequency of broadcasts on
    >> the server? Also, how can I check the approximate time offset between
    >> the clients and the server? Should I then peer each client with the
    >> ntp server and rely on broadcast messages for time synchronization?
    >>


    >Remove the "minpoll 4 maxpoll 4" and let ntpd work as designed. When
    >you force the polling interval to 16 seconds that way, you deprive ntpd
    >of the ability to accurately measure and correct small errors.


    No, it does a good job of measuring and correcting small phase errors. It is
    the drift errors it has trouble with at such a small poll interval (bad
    design, since it certainly has the information needed to get long-term
    drift correct as well). I.e., if you expect the clock to be without a
    source for a length of time, then having long poll intervals is good. If
    you do not want to pollute the net with all your ntp packets, longer poll
    intervals are good. If you expect to be constantly connected and do not
    give a damn about pollution, because it is your own system you are using
    as a source, then short intervals are good. Why in the world does the OP
    care about 6 clients polling every 16 seconds? That is a trivial amount of
    traffic. If you have 16000000 clients polling, as some of the main time
    sources sometimes do, then it becomes a concern.

    Broadcast mode is far less accurate than polling -- no roundtrip
    calculation, etc.
    If your offset is greater than 100 usec and they are on a local LAN, you have
    network problems. It is not a problem with ntp. Fix your network.
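
    For what it is worth, the broadcast interval the OP asked about is set with
    minpoll on the server's broadcast line, and the clients need broadcastclient
    (plus broadcastdelay, since broadcast mode has no round-trip measurement to
    calibrate the path delay). A rough sketch, with a placeholder broadcast
    address; exact option support depends on the ntpd version:

    # on the server: broadcast every 2^4 = 16 s
    # (192.168.1.255 is a placeholder for the real subnet broadcast address)
    broadcast 192.168.1.255 minpoll 4

    # on each client:
    broadcastclient
    # assumed one-way delay in seconds, since the client cannot measure it
    broadcastdelay 0.00005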


  4. Re: broadcast client

    > No, it does a good job of measuring and correcting small phase errors. It is
    > the drift errors it has trouble with at such a small poll interval (bad
    > design, since it certainly has the information needed to get long-term
    > drift correct as well). I.e., if you expect the clock to be without a
    > source for a length of time, then having long poll intervals is good. If
    > you do not want to pollute the net with all your ntp packets, longer poll
    > intervals are good. If you expect to be constantly connected and do not
    > give a damn about pollution, because it is your own system you are using
    > as a source, then short intervals are good. Why in the world does the OP
    > care about 6 clients polling every 16 seconds? That is a trivial amount of
    > traffic. If you have 16000000 clients polling, as some of the main time
    > sources sometimes do, then it becomes a concern.


    In the lab we have no real concern for network pollution. We just want
    good accuracy for our network experiments.

    >
    > Broadcast mode is far less accurate than polling-- no roundtrip
    > calculation, etc.
    > If your offset is greater than 100 usec and they are on a local LAN, you have
    > network problems. It is not a problem with ntp. Fix your network.


    Is there a way to force ntp to listen to only one interface? The machine
    which has synch issues has 10 network interfaces, and according to the log,
    ntp attempts to listen on all of them. I am wondering if somehow that
    causes issues.


  5. Re: broadcast client

    rochertov@gmail.com wrote:
    > Hello,
    > I have a question regarding broadcast mode. I have 6 machines
    > synchronizing with the same ntp server. That server uses a local
    > ntp disciplined clock (we are looking into a GPS one). The machines are


    "local ntp discipline" is a contradiction in terms. I think you mean a
    completely undisciplined local clock configured to be the NTP reference
    clock.

    > connected via 1 Gbps switch. The network is lightly loaded and I
    > configured the clients as such
    >
    > server ntp minpoll 4 maxpoll 4 iburst


    Dave Mills, please note, yet another non-believer in the NTP algorithms.

    Rochertov, please note that this will increase your vulnerability to
    short term network propagation delay variations, although it may help if
    you are subject to temperature variations. Note that NTP doesn't
    correct the time on each poll but rather updates the frequency, and rate
    of change of frequency.
    >
    > However, I notice that two clients have a relatively large offset
    > from the ntp server (greater than 100 micro-seconds according to "ntpq


    Firstly, see other posts over the last month about the fact that offset
    is not the same as error. I would say that, for a suitable OS and
    hardware, with negligible workload, 100 microseconds was a reasonable
    90th percentile figure. You should not expect to achieve it 100% of the time.

    What OS are you using?

    How do the offsets vary with time? An offset with a constant sign would
    be very weird and would indicate a serious problem; to the extent that
    the time discipline algorithm is good, the offsets should average out to
    zero.

    > -p"). I considered setting the server into broadcast mode and
    > enabling broadcastclient on the clients to avoid 6 machines polling
    > the same server every 16 seconds. I have two questions regarding
    > this. What is the syntax for setting the frequency of broadcasts on
    > the server? Also, how can I check the approximate time offset between


    I believe you can measure the "offset" the same way as with a named
    server, although it will be an even worse measure of error, as it will
    also include the effects of changes in network loading between the time
    that the round trip time was calibrated and the time of the current
    measurement.

    For error, you can also use the same methods, probably using OS kernel
    modifications to output exact phase information using a PCI or ISA
    parallel port, or modem control line, and then using specialist hardware
    to compare the timing between two systems. I'd suggest outputting the
    clock interrupt time this way, and outputting the difference between
    tick time and a round number multiple of the nominal tick time via a
    software channel.

    > the clients and the server? Should I then peer each client with the
    > ntp server and rely on broadcast messages for time synchronization?


    Peering and broadcasting are different.

  6. Re: broadcast client

    achertova@gmail.com wrote:

    > Is there a way to force ntp to listen to only one interface? The machine
    > which has synch issues has 10 network interfaces, and according to the log,
    > ntp attempts to listen on all of them. I am wondering if somehow that
    > causes issues.
    >


    Listening on all the interfaces will only make any significant
    difference if you are receiving NTP packets on more than one interface.

    However, having 10 interfaces is extreme and makes me question whether
    the system is really sufficiently lightly loaded.
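
    If one did want to restrict ntpd to a single interface anyway, more recent
    ntpd versions accept interface rules in ntp.conf (older ones do not). A
    sketch, where eth0 stands in for the control-network NIC; the exact rule
    semantics should be checked against the running version's documentation:

    interface ignore all          # stop binding interfaces by default
    interface listen eth0         # then listen only on the control network
    interface listen 127.0.0.1    # keep localhost so ntpq still works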

  7. Re: broadcast client

    David Woolley writes:

    >rochertov@gmail.com wrote:
    >> Hello,
    >> I have a question regarding broadcast mode. I have 6 machines
    >> synchronizing with the same ntp server. That server uses a local
    >> ntp disciplined clock (we are looking into a GPS one). The machines are


    >"local ntp discipline" is a contradiction in terms. I think you mean a
    >completely undisciplined local clock configured to be the NTP reference
    >clock.


    No, he is talking about an ntp server disciplined by a local hardware time
    source -- local not in the technical ntp sense but in the spatial sense.


    >> connected via 1 Gbps switch. The network is lightly loaded and I
    >> configured the clients as such
    >>
    >> server ntp minpoll 4 maxpoll 4 iburst


    >Dave Mills, please note, yet another non-believer in the NTP algorithms.


    What this has to do with not believing in the algorithm I have no idea. If
    ntp runs from a refclock, that is EXACTLY the default behaviour. Running on
    a local private network where you are referencing your own server, that
    behaviour is also fine. The reason for the backoff to long poll intervals is
    a) to save the public servers from flooding, and b) to discipline the local
    clock's drift rate in case there are long periods of disconnection from the
    net. If you have constant connection and it is your own server, neither of
    those apply, and short polling is better.



    >Rochertov, please note that this will increase your vulnerability to
    >short term network propagation delay variations, although it may help if
    >you are subject to temperature variations. Note that NTP doesn't
    >correct the time on each poll but rather updates the frequency, and rate
    >of change of frequency.


    I have no idea why you make the first claim. Yes, the rate will vary as the
    network rates vary, but who cares. The purpose is to discipline the TIME,
    not the rate. And short polls discipline the time better. Rate discipline
    is overrated because of the rate variations due to temperature changes
    during the day anyway.

    >>
    >> However, I notice that two clients have a relatively large offset
    >> from the ntp server (greater than 100 micro-seconds according to "ntpq


    >Firstly, see other posts over the last month about the fact that offset
    >is not the same as error. I would say that, for a suitable OS and
    >hardware, with negligible work load, 100 microseconds was a reasonable
    >90 percentile figure. You should not expect to achieve it 100% of the time.


    ??? On a local network 20usec is more like the mean error, assuming a
    network that is not heavily loaded.


    >What OS are you using?


    >How do the offsets vary with time? An offset with a constant sign would
    >be very weird and would indicate a serious problem. To the extent that
    >the time discipline algorithm is good,


    >> -p"). I considered setting the server into broadcast mode and
    >> enabling broadcastclient on the clients to avoid 6 machines polling
    >> the same server every 16 seconds. I have two questions regarding
    >> this. What is the syntax for setting the frequency of broadcasts on
    >> the server? Also, how can I check the approximate time offset between


    >I believe you can measure the "offset" the same way as with a named
    >server, although it will be an even worse measure of error, as it will
    >also include the effects of changes in network loading between the time
    >that the round trip time was calibrated and the time of the current
    >measurement.


    Agreed. There is absolutely no advantage to broadcast for his situation
    IMHO.


    >For error, you can also use the same methods, probably using OS kernel
    >modifications to output exact phase information using a PCI or ISA
    >parallel port, or modem control line, and then using specialist hardware
    >to compare the timing between two systems. I'd suggest outputting the
    >clock interrupt time this way, and outputting the difference between
    >tick time and a round number multiple of the nominal tick time via a
    >software channel.


    > > the clients and the server? Should I then peer each client with the
    > > ntp server and rely on broadcast messages for time synchronization?


    >Peering and broadcasting are different.


  8. Re: broadcast client

    >
    > What this has to do with not believing in the algorithm I have no idea. If
    > ntp runs from a refclock that is EXACTLY the default behaviour. Running on
    > a local private network where you are referencing your own server, that
    > behaviour is also fine. The reason for the backup to long poll intervals is
    > a) to save the public servers from flooding, and b) to discipline the local
    > clock's drift rate in case there are long periods of disconnection from the
    > net. If you have constant connection and it is your own server, neither of
    > those apply, and short polling is better.


    Thanks, that is the answer I was looking for. In our case, ntp load
    on the ntp server and the network is irrelevant. We are just seeking
    the best synchronization possible, short of getting a PPS signal to
    every machine. I will not switch the machines to broadcast mode and
    will instead rely on fast polling, with maxpoll set equal to minpoll. I
    will also keep iburst enabled, as the docs indicate that it might improve
    performance by loading up the network a bit more.
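
    Concretely, that plan amounts to a client ntp.conf along these lines (the
    server address and drift-file path are placeholders, not our real values):

    server 192.168.1.1 minpoll 4 maxpoll 4 iburst   # the lab's own server, polled every 16 s
    driftfile /var/lib/ntp/ntp.drift                # keeps the learned frequency across restarts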

    Roman


  9. Re: broadcast client

    rochertov@gmail.com writes:

    >>
    >> What this has to do with not believing in the algorithm I have no idea. If
    >> ntp runs from a refclock that is EXACTLY the default behaviour. Running on
    >> a local private network where you are referencing your own server, that
    >> behaviour is also fine. The reason for the backup to long poll intervals is
    >> a) to save the public servers from flooding, and b) to discipline the local
    >> clock's drift rate in case there are long periods of disconnection from the
    >> net. If you have constant connection and it is your own server, neither of
    >> those apply, and short polling is better.


    >Thanks, that is the answer I was looking for. In our case, ntp load
    >on the ntp server and the network is irrelevant. We are just seeking
    >the best synchronization possible, short of getting a PPS signal to
    >every machine. I will not switch the machines to broadcast mode and
    >will instead rely on fast polling, with maxpoll set equal to minpoll. I
    >will also keep iburst enabled, as the docs indicate that it might improve
    >performance by loading up the network a bit more.


    On a local network, I doubt it. It would primarily "prime the pump" by
    getting the IP and MAC address mappings into the network caches before the
    ntp packet proper is sent. But on a local network that would almost
    certainly not be a problem, especially not with a poll interval of 16
    seconds; the decay time of those caches is usually longer than that.
    Now if you have 10^7 machines on your local net, then I might worry.


  10. Re: broadcast client

    Unruh wrote:
    > David Woolley writes:
    >
    >> rochertov@gmail.com wrote:
    >>> Hello,
    >>> I have a question regarding broadcast mode. I have 6 machines
    >>> synchronizing with the same ntp server. That server uses a local
    >>> ntp disciplined clock (we are looking into a GPS one). The machines are

    >
    >> "local ntp discipline" is a contradiction in terms. I think you mean a
    >> completely undisciplined local clock configured to be the NTP reference
    >> clock.

    >
    > No he is talking about an ntp server disciplined by a local hardware time
    > source-- local not in the technical ntp sense but in the spatial sense.
    >

    What he says is:

    > synchronizing with the same ntp server. That server uses a local
    > ntp disciplined clock (we are looking into a GPS one). The machines are
    > connected via 1 Gbps switch. The network is lightly loaded and I
    > configured the clients as such


    To me that says he is planning to use a physically local GPS clock, but
    currently has no means of disciplining the server. If he had an
    alternative means of disciplining the clock:

    a) he probably wouldn't have mentioned the move to GPS;
    b) it is a sufficiently unusual case that I would have expected him to
    have provided details.

    Moreover, looking at the passage as a whole, it looks to me as though he
    has been reading typical responses on the newsgroup, and is trying to
    anticipate the normal challenges he would get about the lack of a source
    of discipline and the locking down of the polling rate.

    >
    >>> connected via 1 Gbps switch. The network is lightly loaded and I
    >>> configured the clients as such
    >>>
    >>> server ntp minpoll 4 maxpoll 4 iburst

    >
    >> Dave Mills, please note, yet another non-believer in the NTP algorithms.

    >
    > What this has to do with not believing in the algorithm I have no idea. If
    > ntp runs from a refclock that is EXACTLY the default behaviour. Running on


    It's not running from a refclock, but over a network, and Dave Mills
    assumes that when running over a network one needs minpoll 6 maxpoll 10.

    There are interesting questions about why reference clocks are treated
    differently, but one guess would be that they are not subject to network
    contention delays. In that case he is challenging the assumption that
    network contention delays are still important in all configurations.

    > a local private network where you are referencing your own server, that
    > behaviour is also fine. The reason for the backup to long poll intervals is
    > a) to save the public servers from flooding, and b) to discipline the local
    > clock's drift rate in case there are long periods of disconnection from the
    > net. If you have constant connection and it is your own server, neither of
    > those apply, and short polling is better.


    If he weren't challenging Dave Mills' position, he would have assumed
    that the errors came from relatively high frequency phase error
    variations and relatively low frequency frequency variations. In that
    case, once you've passed the point of maximum phase noise,
    increasing the poll interval will not significantly degrade the phase
    measurement until the frequency variations start to have an effect, but
    it will continue to improve the frequency accuracy and the measurement
    accuracy for time intervals.

    Your observations suggest there are high frequency components in the
    frequency variations, and, therefore, the crossover point is much lower
    than NTP is built to assume. That then justifies a forced fast poll
    rate (although in the case of maxpoll, only if the software fails to
    find the right rate adaptively). If the min and maxpoll are being set
    on that basis, he is, again, assuming that Dave Mills' design assumptions
    are wrong.

    > ??? On a local network 20usec is more like the mean error, assuming a
    > network that is not heavily loaded.


    My impression (although I haven't used a local clock on fast, quiet
    networks for a long time) is that that might be about right for the
    error, but he is measuring offset, and my experience would suggest that
    often >100 microseconds is more realistic.



  11. Re: broadcast client

    David Woolley writes:

    >Unruh wrote:
    >> David Woolley writes:
    >>
    >>> rochertov@gmail.com wrote:
    >>>> Hello,
    >>>> I have a question regarding broadcast mode. I have 6 machines
    >>>> synchronizing with the same ntp server. That server uses a local
    >>>> ntp disciplined clock (we are looking into a GPS one). The machines are

    >>
    >>> "local ntp discipline" is a contradiction in terms. I think you mean a
    >>> completely undisciplined local clock configured to be the NTP reference
    >>> clock.

    >>
    >> No he is talking about an ntp server disciplined by a local hardware time
    >> source-- local not in the technical ntp sense but in the spatial sense.
    >>

    >What he says is:


    >> synchronizing with the same ntp server. That server uses a local
    >> ntp disciplined clock (we are looking into a GPS one). The machines are
    >> connected via 1 Gbps switch. The network is lightly loaded and I
    >> configured the clients as such


    >To me that says he is planning to use a physically local GPS clock, but
    >currently has no means of disciplining the server. If he had an
    >alternative means of disciplining the clock:


    It would be more helpful if he told us what he meant rather than us arguing
    about what he could have meant. Yours is a possible interpretation.

    >a) he probably wouldn't have mentioned the move to GPS;
    >b) it is a sufficiently unusual case, I would have expected him to have
    >provided details.


    >Moreover, looking at the passage as a whole, it looks to me as though he
    >has been reading typical responses on the newsgroup, and is trying to
    >anticipate the normal challenges he would get about the lack of a source
    >of discipline and the locking down of the polling rate.


    >>
    >>>> connected via 1 Gbps switch. The network is lightly loaded and I
    >>>> configured the clients as such
    >>>>
    >>>> server ntp minpoll 4 maxpoll 4 iburst

    >>
    >>> Dave Mills, please note, yet another non-believer in the NTP algorithms.

    >>
    >> What this has to do with not believing in the algorithm I have no idea. If
    >> ntp runs from a refclock that is EXACTLY the default behaviour. Running on


    >It's not running from a refclock, but over a network and Dave Mills
    >assumes that when running over a network one needs minpoll 6 maxpoll 10.


    As I understand his arguments, this is based on some assumptions. When an
    ntp server is handling thousands or millions of clients, you do not want
    too frequent a query; smaller poll intervals are needed early on so that
    the time required for phase lock is not too long, but longer intervals are
    needed to minimize network impact. Longer poll intervals also discipline
    the frequency (drift), so that in case the clock goes offline it still has
    a good longer-term baseline, because in the simple second-order response
    function he uses the drift discipline is inversely proportional to the
    poll interval while the phase error is relatively constant. (This is not
    true once the drift fluctuations are, over the poll interval, larger than
    the phase fluctuations.)
    I.e., one of the critical "needs" is minimizing impact on the servers.
    That is not there in this case. It is his own server, which he can query
    as often as desired.

    So rapid querying does not do much about the phase errors but does allow
    much more rapid adaptation to drift fluctuations, and reduces those. It
    has worse long-term behaviour as far as drift is concerned (i.e., it does
    a worse job of figuring out what the long-term drift is), but that is
    irrelevant if you query often. If you switch from frequent queries to
    rare ones, then this will cause trouble.

    Thus the standard defaults are based on minimizing impact on servers, and
    on minimizing errors when connectivity fails for appreciable periods.


    >There are interesting questions about why reference clocks are treated
    >differently, but one guess would be that they are not subject to network
    > contention delays. In that case he is challenging the assumption that
    >network contention delays are still important in all configurations.


    >> a local private network where you are referencing your own server, that
    >> behaviour is also fine. The reason for the backup to long poll intervals is
    >> a) to save the public servers from flooding, and b) to discipline the local
    >> clock's drift rate in case there are long periods of disconnection from the
    >> net. If you have constant connection and it is your own server, neither of
    >> those apply, and short polling is better.


    >If he weren't challenging Dave Mills position, he would have assumed
    >that the errors came from relatively high frequency phase error
    >variations and relatively low frequency frequency variations. In that
    >case, once you've passed the point of maximum phase noise,
    >increasing the poll interval will not significantly degrade the phase
    >measurement until the frequency variations start to have an effect, but
    >it will continue to improve the frequency accuracy and the measurement
    >accuracy for time intervals.


    >Your observations suggest there are high frequency components in the
    >frequency variations, and, therefore, the cross over point is much lower
    >than NTP is built to assume. That then justifies a forced fast poll
    >rate (although in the case of maxpoll, only if the software fails to
    >find the right rate adaptively). If the min and maxpoll are being set
    >on that basis, he is, again, assuming that Dave Mills design assumptions
    >are wrong.


    >> ??? On a local network 20usec is more like the mean error, assuming a
    >> network that is not heavily loaded.


    >My impression (although I haven't used a local clock on fast, quiet
    >networks for a long time) is that that might be about right for the
    >error, but he is measuring offset, and my experience would suggest that
    >often >100 microseconds is more realistic.




  12. Re: broadcast client

    To provide some details about the setup and to clarify some points.
    We are creating an emulated testbed for TDMA experiments, where
    several machines connected over the same VLAN follow a TDMA schedule.
    We have a second control network that is lightly loaded and does not
    see any experimental traffic. For us, it is critical to ensure that
    all the nodes participating in the TDMA schedule have pretty much the
    same time, with a small offset (ideally under 100 microseconds). For
    the experiment, it is not important whether the local time is anywhere
    close to true global time, hence we don't sync with a stratum 1 ntp
    server (we have a flaky Internet connection). Additionally, we are not
    concerned with drift or clock skew that happens over periods longer
    than a few hours. Ideally, we would have a GPS PPS pulse driving ntp
    on the ntp server; however, we don't have one yet and have to use the
    clock of the machine. It was my impression from reading the documents
    that frequent polling of an ntp server on the same fast LAN will
    minimize time offsets, and that is our primary concern. I also would
    not call myself a non-believer in ntp algorithms. I am just trying to
    configure ntp to best fit our needs, which are quite different from
    the usual situation.
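
    One way to track how far each client actually wanders from the server over
    a run, beyond spot checks with "ntpq -p" (whose offset and jitter columns
    are in milliseconds), is to have ntpd log its statistics. A sketch, with
    placeholder paths:

    # in each client's ntp.conf: log loop-filter and per-peer statistics
    statsdir /var/log/ntpstats/                       # placeholder; must be writable by ntpd
    statistics loopstats peerstats
    filegen loopstats file loopstats type day enable
    filegen peerstats file peerstats type day enable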
