This is a discussion on More problems with multicast server as peer - NTP

Moin,
I'm using the p118 tarball with Linux both sending and receiving
multicast (IPv4+6) timestamps.
As far as I know, the sending machine is operating as expected --
`tcpdump' shows the desired broadcasts about every 16 seconds on
the two multicast addresses.
Also, the listening machine usually manages to initialise itself
from these timestamps, but after that there are problems.
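(For reference, a minimal listening-side setup for this would be along
the lines of the fragment below; the group addresses are stand-ins, not
the actual ones. Note that ntpd authenticates broadcast/multicast
associations unless told otherwise:)

```
# ntp.conf on the listening machine -- example group addresses only
multicastclient 224.0.1.1        # IPv4 NTP multicast group
multicastclient ff05::101        # IPv6 site-local NTP multicast group
# Without crypto keys configured, ntpd will reject multicast packets
# unless authentication is disabled:
disable auth
```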
I have only a single machine spitting out the m'cast, synced to a
DCF77 reference clock (hence the `.DCFa.' refid below).
What seems to happen on the listening machine is that `ntpq -pn'
shows the `poll' field at 16 seconds (fair enough; though I haven't
found the magic words to alter this), while the `reach' field shows
an octal value that doubles every two seconds, rather than changing
once per `poll' interval as it normally does.
That means that just before the 16-second poll interval elapses,
`reach' has climbed to `0200', the last value seen:
     remote          refid      st t  when poll reach   delay   offset  jitter
fe80::200:c0ff:  .DCFa.          1 m    15   16   200   0.688  -3104.7   0.004
then the peer disappears briefly until it's heard from again, and the
cycle starts anew with `reach' at 01.
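(For what it's worth, `reach' is an 8-bit shift register printed in
octal: each poll interval shifts it left one bit, and a packet actually
accepted from the peer sets the low bit. A value that merely doubles --
01, 02, 04, ... 0200 -- therefore means only the first packet was ever
counted. A minimal sketch of that register:)

```python
def update_reach(reach, heard):
    # NTP's `reach' is an 8-bit shift register, displayed in octal.
    # Each poll interval shifts it left; a packet accepted from the
    # peer sets the low bit.
    reach = (reach << 1) & 0xFF
    if heard:
        reach |= 1
    return reach

reach = update_reach(0, True)       # first packet counted: 0o1
for _ in range(7):                  # seven intervals with nothing counted
    reach = update_reach(reach, False)
print(oct(reach))                   # doubles each time, ending at 0o200
```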
This is enough to keep the listening ntpd from receiving any further
useful data from these m'casts, so, while it may have stepped the
time at startup, it never locks back on, and thus drifts.
Here's another one or two for laughs.
fe80::200:c0ff:  .DCFa.          1 m     8   16    20   0.688  -3098.8   0.163
fe80::200:c0ff:  .DCFa.          1 m    13   16   100   0.688  -3101.7   0.004
This is the same for both IPv4 and IPv6 multicasts; I'm unable to
lock onto the single peer sending time data every 16 seconds.
Is this to be expected? Or am I doing something horribly wrong?
I'm assuming the sending end works, since I can initially sync-step
to it, but the rapid increase of the `reach' field makes me
think something else is wrong.