Thread: NTP phase lock loop inputs and outputs?

  1. Re: NTP phase lock loop inputs and outputs?

    Bill,

    If you need only the frequency, least-squares doesn't help a lot; all
    you need are the first and last points during the measurement interval.
    The NIST LOCKCLOCK and ntpd FLL disciplines compute the frequency
    directly and exponentially average successive intervals. The NTP
    discipline is in fact a hybrid PLL/FLL where the PLL dominates below the
    Allan intercept and FLL above it and also when started without a
    frequency file. The trick is to separate the phase component from the
    frequency component, which requires some delicate computations. This
    allows the frequency to be accurately computed as above, yet allows a
    phase correction during the measurement interval.
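    As an illustration only (a minimal sketch of the FLL idea described above,
    not actual ntpd or LOCKCLOCK code; the gain value and names are assumed),
    the frequency over one measurement interval can be taken from the first and
    last offset samples and then exponentially averaged across intervals:

        # FLL-style frequency estimate: slope from the interval's endpoints,
        # folded into a running exponential average (illustrative sketch).
        class FllSketch:
            def __init__(self, gain=0.25):
                self.gain = gain      # exponential-average weight (assumed)
                self.freq = 0.0       # accumulated frequency estimate (s/s)

            def update(self, first_offset, last_offset, interval):
                # first/last offsets in seconds, interval in seconds
                freq_error = (last_offset - first_offset) / interval
                self.freq += self.gain * freq_error
                return self.freq

        # Offset drifts from 1.0 ms to 1.5 ms over 64 s: ~7.8 ppm frequency error.
        fll = FllSketch()
        print(fll.update(1.0e-3, 1.5e-3, 64.0))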

    Dave

    Unruh wrote:
    > David Woolley writes:
    >
    >
    >>Unruh wrote:

    >
    >
    >>>I do not understand this. You seem to be measuring the offsets, not the
    >>>frequencies. The offset is irrelevant. What you want to do is to measure

    >
    >
    >>Measuring phase error to control frequency is pretty much THE standard
    >>way of doing it in modern electronics. It's called a phase locked loop

    >
    >
    > Sure. In the case of ntp you want to have zero phase error. ntp reduces the
    > phase error slowly by changing the frequency. This has the advantage that
    > the frequency error also gets reduced (slowly). He wants to reduce the
    > frequency error only. He does not give a damn about the phase error
    > apparently. Thus you do NOT want to reduce the frequency error by attacking
    > the phase error. That is a slow way of doing it. You want to estimate the
    > frequency error directly. Now in his case he is doing so by measuring the
    > phase, so you need at least two phase measurements to estimate the
    > frequency error. But you do NOT want to reduce the frequency error by
    > reducing the phase error-- far too slow.
    >
    > One way of reducing the frequency error is to use the ntp procedure but
    > applied to the frequency. But you must feed in an estimate of the frequency
    > error. Another way is the chrony technique -- collect phase points, do a
    > least squares fit to find the frequency, and then use that information to
    > drive the frequency to zero. To reuse past data, also correct the prior
    > phase measurements by the change in frequency:
    > t_{i-j} -= (t_i - t_{i-j}) df
    >
    >
    >>(PLL) and it is getting difficult to find any piece of electronics that
    >>doesn't include one these days. E.g. the typical digitally tuned radio

    >
    >
    > A PLL is a dirt simple thing to implement electronically. A few resistors
    > and capacitors. It is, however, a very simple Markovian process. There is far
    > more information in the data than that, and digitally it is easy to
    > implement far more complex feedback loops than that.
    >
    >
    >
    >>or TV has a crystal oscillator, which is divided down to the channel
    >>spacing or a sub-multiple, and a configurable divider on the local
    >>oscillator divides that down to the same frequency. The resulting two
    >>signals are then phase locked, by measuring the phase error on each
    >>cycle, low pass filtering it, and using it to control the local
    >>oscillator frequency, resulting in their matching in frequency, and
    >>having some constant phase error.

    >
    >
    >>>the offset twice, and ask if the difference is constant or not. I.e., the
    >>>offset does not correspond to being off by 5 Hz.

    >
    >
    >>ntpd only uses this method on a cold start, to get the initial coarse
    >>calibration. Typical electronic implementations don't use it at all,
    >>but either do a frequency sweep or simply open up the low pass filter,
    >>to get initial lock.

    >
    >
    > And? You are claiming that that is efficient or easy? I would claim the
    > latter. And his requirements are NOT ntp's requirements. He does not care
    > about the phase errors. He is only concerned about the frequency errors.
    > Driving the frequency errors to zero by driving the phase errors to zero is
    > not a very efficient technique -- unless of course you want the phase errors
    > to be zero (as ntp does, and he does not).
    >
    >
    >
    >


  2. Re: NTP phase lock loop inputs and outputs?

    "David L. Mills" writes:

    >Bill,


    >If you need only the frequency, least-squares doesn't help a lot; all
    >you need are the first and last points during the measurement interval.


    Well, no. If you have random phase noise, a least squares fit will improve
    the above estimate by roughly sqrt(n/4), where n is the number of points.
    That can be significant. It is certainly true that the end points have the
    most weight (which is why the factor of 1/4). I.e., if you have 64 points,
    you are better by about a factor of 4, which is not insignificant.
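    For concreteness, here is a small sketch of the least-squares frequency
    estimate being discussed (illustrative only, not chrony's code; the sample
    spacing and noise level are assumed): fit a straight line to the phase
    samples and take the slope as the frequency error.

        # Least-squares frequency (slope) from equally spaced phase samples.
        import numpy as np

        def ls_frequency(times, offsets):
            # Returns (slope, intercept); the slope is the fractional frequency error.
            slope, intercept = np.polyfit(times, offsets, 1)
            return slope, intercept

        # 64 samples 1 s apart, true frequency error 5 ppm, 0.1 ms white phase noise.
        rng = np.random.default_rng(0)
        t = np.arange(64.0)
        phase = 5e-6 * t + rng.normal(0.0, 1e-4, t.size)
        print(ls_frequency(t, phase)[0])   # close to 5e-6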

    >The NIST LOCKCLOCK and ntpd FLL disciplines compute the frequency
    >directly and exponentially average successive intervals. The NTP
    >discipline is in fact a hybrid PLL/FLL where the PLL dominates below the
    >Allan intercept and FLL above it and also when started without a
    >frequency file. The trick is to separate the phase component from the
    >frequency component, which requires some delicate computations. This
    >allows the frequency to be accurately computed as above, yet allows a
    >phase correction during the measurement interval.


    He of course is not interested in phase corrections.




  3. Re: NTP phase lock loop inputs and outputs?

    Unruh writes:

    >"David L. Mills" writes:


    >>Bill,


    >>If you need only the frequency, least-squares doesn't help a lot; all
    >>you need are the first and last points during the measurement interval.


    >Well, no. If you have random phase noise, a least squares fit will improve
    >the above estimate by roughly sqrt(n/4) where n is the number of points.
    >That can be significant. It is certainly true that the end points have the
    >most weight ( which is why the factor of 1/4). Ie, if you have 64 points,
    >you are better by about a factor of 4 which is not insignificant.


    Oops, I think it is sqrt(n/6) (for large n). A simpler thing is to take the
    average of the first third and the average of the last third, subtract them,
    and divide by the time difference between the average times of the two
    intervals. This gives a factor of sqrt(4n/27) improvement instead of
    sqrt(n/6) -- a minor amount worse, but probably much easier to calculate.
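    A small simulation sketch (assuming equally spaced samples and white phase
    noise, with made-up numbers) comparing the two-endpoint slope, the full
    least-squares slope, and the first-third/last-third shortcut just described:

        # Scatter of three frequency estimators on pure white phase noise
        # (true slope is zero), to check the sqrt(n/6) and sqrt(4n/27) factors.
        import numpy as np

        rng = np.random.default_rng(1)
        n, trials = 64, 20000
        t = np.arange(n, dtype=float)
        T = t[-1] - t[0]
        k = n // 3

        end, ls, thirds = [], [], []
        for _ in range(trials):
            y = rng.normal(0.0, 1.0, n)
            end.append((y[-1] - y[0]) / T)
            ls.append(np.polyfit(t, y, 1)[0])
            thirds.append((y[-k:].mean() - y[:k].mean()) / (t[-k:].mean() - t[:k].mean()))

        e = np.std(end)
        print("least squares :", e / np.std(ls), "vs sqrt(n/6)   =", (n / 6) ** 0.5)
        print("thirds        :", e / np.std(thirds), "vs sqrt(4n/27) =", (4 * n / 27) ** 0.5)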




  4. Re: NTP phase lock loop inputs and outputs?

    Bill,

    Ahem. The first point I made was that least-squares doesn't help the
    frequency estimate. The next point you made is that least-squares
    improves the phase estimate. The last point you made is that phase noise
    is not important. Our points have been made and further discussion would
    be boring.

    Dave



  5. Re: NTP phase lock loop inputs and outputs?

    "David L. Mills" writes:

    >Bill,


    >Ahem. The first point I made was that least-squares doesn't help the
    >frequency estimate. The next point you made is that least-squares
    >improves the phase estimate. The last point you made is that phase noise


    No. The point I tried to make was that least squares improves the FREQUENCY
    estimate by sqrt(n/6) for large n, where n is the number of points (assumed
    equally spaced) at which the phase is measured. I am sorry that the way I
    phrased it could have been misunderstood.

    The phase estimate is ALSO improved, in proportion to sqrt(n).
    This assumes uncorrelated phase errors dominate the error budget.
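    For reference, a quick derivation of where the sqrt(n/6) comes from, under
    the same assumptions (n equally spaced samples spanning T, independent phase
    noise of variance sigma^2):

        % Variance of the least-squares slope for equally spaced samples:
        \operatorname{Var}(\hat f_{LS}) = \frac{12\sigma^2}{n(n^2-1)\,\Delta t^2}
            \approx \frac{12\sigma^2}{n\,T^2} \qquad (n \gg 1,\ T=(n-1)\Delta t)
        % Variance of the estimate using only the first and last samples:
        \operatorname{Var}(\hat f_{ends}) = \frac{2\sigma^2}{T^2}
        % Improvement factor (ratio of standard deviations):
        \sqrt{\frac{\operatorname{Var}(\hat f_{ends})}{\operatorname{Var}(\hat f_{LS})}}
            \approx \sqrt{\frac{2\sigma^2/T^2}{12\sigma^2/(nT^2)}} = \sqrt{\frac{n}{6}}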



    >is not important. Our points have been made and further discussion would
    >be boring.


    Except you misunderstood my point. It may still be boring to you.




  6. Re: NTP phase lock loop inputs and outputs?

    Bill,

    NIST doesn't agree with you. Only the first and last are truly
    significant. Reference: Levine, J. Time synchronization over the
    Internet using an adaptive frequency locked loop. IEEE Trans. UFFC,
    46(4), 888-896, 1999.

    Dave



  7. Re: NTP phase lock loop inputs and outputs?

    "David L. Mills" writes:

    >Bill,


    >NIST doesn't agree with you. Only the first and last are truly
    >significant. Reference: Levine, J. Time synchronization over the
    >Internet using an adaptive frequency locked loop. IEEE Trans. UFFC,
    >46(4), 888-896, 1999.


    Well, they may be operating under different assumptions (e.g. that the noise
    is not dominated by independent phase errors), but if that assumption is
    correct then the results are simple and correct. IF the errors are
    dominated by 1/f noise, other assumptions may well apply, but in any case
    using more than just the first and last points will always help knock down
    the phase noise. For large n, if I take the first and last points to
    calculate the slope, and then take the second and penultimate points to
    calculate another slope, those two pieces of data are almost equivalent as
    far as their frequency estimate and 1/f estimates are concerned, but clearly
    the random phase errors in the various points are knocked down by averaging
    the two estimates. I will try to get hold of that article. Do you have any
    suggestions as to an easy way to do so (e.g. on the net)?
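    To make the pairing argument concrete, a tiny sketch (assuming equally
    spaced samples and white phase noise; the numbers are made up) that averages
    the slopes from the outermost symmetric pairs -- first & last, second &
    penultimate, and so on -- each with nearly the same lever arm:

        # Average the slopes of symmetric sample pairs to knock down phase noise.
        import numpy as np

        def paired_slope(times, offsets, pairs=4):
            slopes = [(offsets[-1 - k] - offsets[k]) / (times[-1 - k] - times[k])
                      for k in range(pairs)]
            return np.mean(slopes)

        rng = np.random.default_rng(2)
        t = np.arange(64.0)
        y = 5e-6 * t + rng.normal(0.0, 1e-4, t.size)   # 5 ppm drift + phase noise
        print(paired_slope(t, y))                      # near 5e-6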

    Note that that paper is NOT by NIST, but by Judah Levine who happens
    to work at NIST. That I sometimes disagree with you should not be taken to
    mean that the University of British Columbia disagrees with you.





  8. Re: NTP phase lock loop inputs and outputs?



    You must have read a different paper than that one. I found it (through our
    library), and it says that if you have n measurements in a time period T,
    the best strategy is to take n/2 measurements at the beginning of the
    interval and n/2 at the end, to minimize the effect of the white phase noise
    on the frequency estimate. That is perfectly true, and gives an error which
    goes as sqrt(4/n) delta/T rather than the sqrt(12/n) delta/T for equally
    spaced measurements (assuming large n), where T is the total time interval
    and delta is the standard deviation of each phase measurement. But it
    certainly does NOT say that if you have n measurements, you should just use
    the first and last one to estimate the slope.

    If you have n measurements, the best estimate of the slope is a least
    squares fit. If they are equally spaced, the center third do not help much
    (nor do they hinder), but a least squares fit is always the best thing to
    do.
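    A short simulation sketch of that comparison (white phase noise assumed;
    the numbers are illustrative, not from the paper): least-squares fits over
    n/2-and-n/2 clustered sampling versus equally spaced sampling.

        # Frequency-estimate scatter: samples clustered at the two ends of the
        # span versus equally spaced samples, both fitted by least squares.
        import numpy as np

        rng = np.random.default_rng(3)
        n, T, delta, trials = 64, 64.0, 1.0, 20000

        t_even = np.linspace(0.0, T, n)
        t_clus = np.concatenate([np.linspace(0.0, 1.0, n // 2),
                                 np.linspace(T - 1.0, T, n // 2)])

        def slope_std(times):
            slopes = [np.polyfit(times, rng.normal(0.0, delta, times.size), 1)[0]
                      for _ in range(trials)]
            return np.std(slopes)

        print("equally spaced:", slope_std(t_even), "~ sqrt(12/n)*delta/T =", (12 / n) ** 0.5 * delta / T)
        print("clustered     :", slope_std(t_clus), "~ sqrt(4/n)*delta/T  =", (4 / n) ** 0.5 * delta / T)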


    "David L. Mills" writes:

    >Bill,


    >NIST doesn't agree with you. Only the first and last are truly
    >significant. Reference: Levine, J. Time synchronization over the
    >Internet using an adaptive frequency locked loop. IEEE Trans. UFFC,
    >46(4), 888-896, 1999.


    >Dave


    >Unruh wrote:


    >> "David L. Mills" writes:
    >>
    >>
    >>>Bill,

    >>
    >>
    >>>Ahem. The first point I made was that least-squares doesn't help the
    >>>frequency estimate. The next point you made is that least-squares
    >>>improves the phase estimate. The last point you made is that phase noise

    >>
    >>
    >> No. The point I tried to make was the least squares improved the FREQUENCY
    >> estimate by sqrt(n/6) for large n, where n is the number of points (assumed
    >> equally spaced) at which the phase is measured. I am sorry that the way I
    >> phrased it could have been misunderstood.
    >>
    >>
    >> The phase is ALSO improved proportional to sqrt(n)
    >> .
    >> This assumes uncorrelated phase errors dominate the error budget.
    >>
    >>
    >>
    >>
    >>>is not important. Our points have been made and further discussion would
    >>>be boring.

    >>
    >>
    >> Except you misunderstood my point. It may still be boring to you.
    >>
    >>
    >>
    >>>Dave

    >>
    >>
    >>>Unruh wrote:
    >>>
    >>>>"David L. Mills" writes:
    >>>>
    >>>>
    >>>>
    >>>>>Bill,
    >>>>
    >>>>
    >>>>>If you need only the frequency, least-squares doesn't help a lot; all
    >>>>>you need are the first and last points during the measurement interval.
    >>>>
    >>>>
    >>>>Well, no. If you have random phase noise, a least squares fit will improve
    >>>>the above estimate by roughly sqrt(n/4) where n is the number of points.
    >>>>That can be significant. It is certainly true that the end points have the
    >>>>most weight ( which is why the factor of 1/4). Ie, if you have 64 points,
    >>>>you are better by about a factor of 4 which is not insignificant.
    >>>>
    >>>>
    >>>>
    >>>>>The NIST LOCKCLOCK and nptd FLL disciplines compute the frequency
    >>>>>directly and exponentially average successive intervals. The NTP
    >>>>>discipline is in fact a hybrid PLL/FLL where the PLL dominates below the
    >>>>>Allan intercept and FLL above it and also when started without a
    >>>>>frequency file. The trick is to separate the phase component from the
    >>>>>frequency component, which requires some delicate computations. This
    >>>>>allows the frequency to be accurately computed as above, yet allows a
    >>>>>phase correction during the measurement interval.
    >>>>
    >>>>
    >>>>He of course is not interested in phase corrections.
    >>>>
    >>>>
    >>>>
    >>>>
    >>>>
    >>>>>Dave
    >>>>
    >>>>
    >>>>>Unruh wrote:
    >>>>>
    >>>>>
    >>>>>>David Woolley writes:
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>>>Unruh wrote:
    >>>>>>
    >>>>>>
    >>>>>>>>I do not understand this. You seem to be measuring the offsets, not the
    >>>>>>>>frequencies. The offset is irrelevant. What you want to do is to measure
    >>>>>>
    >>>>>>
    >>>>>>>Measuring phase error to control frequency is pretty much THE standard
    >>>>>>>way of doing it in modern electronics. It's called a phase locked loop
    >>>>>>
    >>>>>>
    >>>>>>Sure. In the case of ntp you want to have zero phase error. ntp reduces the
    >>>>>>phase error slowly by changing the frequency. This has the advantage that
    >>>>>>the frequency error also gets reduced (slowly). He wants to reduce the
    >>>>>>frequency error only. He does not give a damn about the phase error
    >>>>>>apparently. Thus you do NOT want to reduce the frequecy error by attacking
    >>>>>>the phase error. That is a slow way of doing it. You want to estimate the
    >>>>>>frequency error directly. Now in his case he is doing so by measuring the
    >>>>>>phase, so you need at least two phase measurements to estimate the
    >>>>>>frequency error. But you do NOT want to reduce the frequency error by
    >>>>>>reducing the phase error-- far too slow.
    >>>>>>
    >>>>>>One way of reducing the frequency error is to use the ntp procedure but
    >>>>>>applied to the frequency. But you must feed in an estimate of the frequecy
    >>>>>>error. Anothr way is the chrony technique. -- collect phase points, do a
    >>>>>>least squares fit to find the frequency, and then use that information to
    >>>>>>drive the frequecy to zero. To reuse past data, also correct the prior
    >>>>>>phase measurements by the change in frequency.
    >>>>>>(t_{i-j}-=(t_{i}-t_{i-j}) df
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>>>(PLL) and it is getting difficult to find any piece of electrnics that
    >>>>>>>doesn't include one these days. E.g. the typical digitally tuned radio
    >>>>>>
    >>>>>>
    >>>>>>A PLL is a dirt simply thing to impliment electronically. A few resistors
    >>>>>>and capacitors. It however is a very simply Markovian process. There is far
    >>>>>>more information in the data than that, and digititally it is easy to
    >>>>>>impliment far more complex feedback loops than that.
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>>>or TV has a crystal oscillator, which is divided down to the channel
    >>>>>>>spacing or a sub-multiple, and a configurable divider on the local
    >>>>>>>oscillator divides that down to the same frequency. The resulting two
    >>>>>>>signals are then phase locked, by measuring the phase error on each
    >>>>>>>cycle, low pass filtering it, and using it to control the local
    >>>>>>>oscillator frequency, resulting in their matching in frequency, and
    >>>>>>>having some constant phase error.
    >>>>>>
    >>>>>>
    >>>>>>>>the offset twice, and ask if the difference is constant or not. Ie, th
    >>>>>>>>eoffset does not correspond to being off by 5Hz.
    >>>>>>
    >>>>>>
    >>>>>>>ntpd only uses this method on a cold start, to get the initial coarse
    >>>>>>>calibration. Typical electronic implementations don't use it at all,
    >>>>>>>but either do a frequency sweep or simply open up the low pass filter,
    >>>>>>>to get initial lock.
    >>>>>>
    >>>>>>
    >>>>>>And? You are claiming that that is efficient or easy? I would claim the
    >>>>>>latter. And his requirements are NOT ntp's requirements. He does not care
    >>>>>>about the phase errors. He is onlyconcerned about the frequency errors.
    >>>>>>driving the frequency errors to zero by driving the phase errors to zero is
    >>>>>>not a very efficient technique-- unless of course you want the phase errors
    >>>>>>to be zero( as ntp does, and he does not).
    >>>>>>
    >>>>>>
    >>>>>>
    >>>>>>


  9. Re: NTP phase lock loop inputs and outputs?

    Bill,

    Read it again. Judah takes multiple samples to reduce the phase noise,
    not to improve the frequency estimation.

    Dave



  10. Re: NTP phase lock loop inputs and outputs?

    "David L. Mills" writes:

    >Bill,


    >Read it again. Judah takes multiple samples to reduce the phase noise,
    >not to improve the frequency estimation.


    Dave: The frequency estimate is obtained by subtracting two phase
    determinations, so the phase noise enters the frequency determination
    directly. By reducing the phase noise you reduce the frequency noise as
    well. I think you need to read it again, but each of us simply telling the
    other to read more carefully will not get us anywhere.

    The frequency estimate is obtained in NTP and in his procedure by making
    phase measurements:
    f_i = (y_i - y_{i-1})/T
    If y_i = z_i + e_i, where z_i is the "true" time and e_i is a Gaussian random
    variable, then
    delta f_i = sqrt(<e_i^2> + <e_{i-1}^2>)/T
    By reducing <e^2> you reduce delta f_i. And, as you point out, you can
    reduce <e^2> by making a bunch of measurements. Those measurements can
    all be made at the end points or spread over the time interval T. The latter
    is not quite as effective in reducing delta f_i, since many of the
    measurements do not have as long a "lever arm" as they would if they were
    all at the endpoints; that is why uniform sampling is about sqrt(3) worse
    than clustering at the end points. But in either case, the more measurements
    you make, the more you reduce the uncertainty in the frequency estimate.

    Anyway, at this point everyone else has enough information to make up their
    own minds.
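
    For anyone who wants to check those scalings numerically, here is a minimal
    Monte Carlo sketch in Python/NumPy (the interval, sample count and noise
    level are arbitrary assumptions) comparing the endpoint-only estimate, an
    ordinary least-squares fit to equally spaced samples, and n/2-plus-n/2
    clustering at the ends, all under independent Gaussian phase noise:

    # Frequency-estimate error vs. sampling strategy under white phase noise.
    # All parameter values below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 1000.0        # measurement interval, seconds
    n = 64            # number of phase samples
    delta = 1e-3      # std dev of white phase noise per sample, seconds
    true_f = 5e-6     # true fractional frequency error
    trials = 10000

    t_uniform = np.linspace(0.0, T, n)
    t_cluster = np.concatenate([np.zeros(n // 2), np.full(n // 2, T)])

    err_end, err_ls, err_cluster = [], [], []
    for _ in range(trials):
        y_u = true_f * t_uniform + rng.normal(0.0, delta, n)
        y_c = true_f * t_cluster + rng.normal(0.0, delta, n)
        err_end.append((y_u[-1] - y_u[0]) / T - true_f)            # endpoints only
        err_ls.append(np.polyfit(t_uniform, y_u, 1)[0] - true_f)   # least-squares slope
        err_cluster.append((y_c[n // 2:].mean() - y_c[:n // 2].mean()) / T - true_f)

    print("endpoints only :", np.std(err_end),     " theory", np.sqrt(2) * delta / T)
    print("least squares  :", np.std(err_ls),      " theory", np.sqrt(12 / n) * delta / T)
    print("n/2 clustering :", np.std(err_cluster), " theory", np.sqrt(4 / n) * delta / T)

    The empirical standard deviations should come out close to sqrt(2)*delta/T,
    sqrt(12/n)*delta/T and sqrt(4/n)*delta/T, i.e. the sqrt(3) difference
    described above and the sqrt(n/6) improvement over the bare endpoint
    estimate mentioned earlier in the thread.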



    >Dave




  11. Re: NTP phase lock loop inputs and outputs?

    My understanding is that least squares is optimal only when the
    residuals are white. For measurements of atomic frequency standards
    such as Rubidium or Cesium, the noise process is dominated by white
    frequency noise, and in this case linear regression yields the optimal
    estimate of frequency. A little snooping around on the NIST web site
    will provide the relevant backup info.

    For so many other precision timing and frequency applications, the noise
    processes are decidedly un-white. David Allan developed his famous
    two-sample variance to handle these divergent, non-stationary noise
    processes. For instance, quartz oscillators are dominated by flicker
    frequency noise for averaging times greater than about 100 milliseconds,
    and eventually turn to random walk at longer averaging times. Selective
    Availability of GPS (when it was in effect) was a white phase noise
    process that modulated the time transfer for un-keyed users. The
    statistics of network time transfer via ntp are undoubtedly divergent,
    but I have not seen any data that showed it to be white frequency noise
    dominant.

    So, it is not clear that linear regression is optimal for estimating the
    frequency via ntp, unless someone has determined the statistics to be
    white frequency. I personally have not performed the measurements to
    make that determination, but it would not surprise me if Judah Levine has.
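
    For reference, here is a minimal overlapping Allan deviation sketch in
    Python/NumPy (the sampling interval and the synthetic white-frequency-noise
    data are purely illustrative assumptions); the slope of the deviation versus
    averaging time is what separates white phase, white frequency, flicker and
    random-walk behaviour:

    # Overlapping Allan deviation from phase data x[i] (seconds) taken every tau0.
    # The synthetic white-frequency-noise input is an illustrative assumption.
    import numpy as np

    def allan_deviation(x, tau0, m):
        """Overlapping Allan deviation at averaging time m*tau0."""
        x = np.asarray(x, dtype=float)
        d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second differences of phase
        avar = np.sum(d * d) / (2.0 * d.size * (m * tau0) ** 2)
        return np.sqrt(avar)

    rng = np.random.default_rng(2)
    tau0 = 1.0                                 # seconds between phase samples
    y = rng.normal(0.0, 1e-9, 100_000)         # white fractional-frequency noise
    x = np.cumsum(y) * tau0                    # phase is the integral of frequency

    for m in (1, 10, 100, 1000):
        print(f"tau = {m * tau0:7.1f} s   adev = {allan_deviation(x, tau0, m):.3e}")
    # For white frequency noise the printed deviation falls off roughly as
    # 1/sqrt(tau); flicker would flatten out and random walk would rise.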

    Bruce



  12. Re: NTP phase lock loop inputs and outputs?

    Bruce writes:

    >My understanding is that least squares is optimal only when the
    >residuals are white. For measurements of atomic frequency standards


    Yes, and I made that assumption explicitly. The assumption is generally
    true for short enough time intervals. If the network delays are very short,
    or if the time over which you are determining the frequency is very long,
    then it becomes a bad assumption.


    >such as Rubidium or Cesium, the noise process is dominated by white
    >frequency noise, and in this case linear regression yields the optimal
    >estimate of frequency. A little snooping around on the NIST web site
    >will provide the relevant backup info.


    >For so many other precision timing and frequency applications, the noise
    >processes are decidedly un-white. David Allan developed his famous
    >two-sample variance to handle these divergent, non-stationary noise
    >processes. For instance, quartz oscillators are dominated by flicker
    >frequency noise for averaging times greater than about 100 milliseconds,
    >and eventually turn to random walk at longer averaging times. Selective
    >Availability of GPS (when it was in effect) was a white phase noise
    >process that modulated the time transfer for un-keyed users. The
    >statistics of network time transfer via ntp are undoubtedly divergent,
    >but I have not seen any data that showed it to be white frequency noise
    >dominant.


    All the data suggests that it is white and the explicit assumption of that
    Levine paper is that it is white.



    >So, it is not clear that linear regression is optimal for estimating the
    >frequency via ntp, unless someone has determined the statistics to be
    >white frequency. I personally have not performed the measurements to
    >make that determination, but it would not surprise me if Judah Levine has.


    And he explicitly assumes that the network delay noise is white. His whole
    procedure is to make a large number (e.g. 25-50) of NTP-type measurements at
    one time (within seconds), then wait a long time (about 1/4 of a day) and do
    it again. He estimates the frequency by averaging the measurements taken at
    any one time, and then using those averaged phase errors to determine the
    frequency.

    The number of measurements at one instant is chosen by requiring that the
    frequency error due to the white noise (which falls off as 1/sqrt(n)) is
    equal to the other errors (flicker noise, etc.) or to the predetermined
    error level wanted.
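
    A minimal sketch of that burst-then-wait estimate (plain Python/NumPy; the
    burst size, gap and noise level are illustrative assumptions, not numbers
    taken from the paper):

    # Burst-average frequency estimate: average a burst of phase offsets now,
    # another burst after a long gap, and difference the two averages.
    import numpy as np

    rng = np.random.default_rng(1)
    m = 30            # measurements per burst (the 25-50 range mentioned above)
    gap = 21600.0     # seconds between bursts, about 1/4 day
    sigma = 2e-3      # white phase noise per measurement, seconds
    true_f = 3e-6     # true fractional frequency error (for the simulation)

    def burst(centre):
        """Simulate m closely spaced offset measurements around one epoch."""
        return centre + rng.normal(0.0, sigma, m)

    x1 = burst(0.0).mean()              # averaged phase offset, first epoch
    x2 = burst(true_f * gap).mean()     # averaged phase offset, second epoch
    f_est = (x2 - x1) / gap             # frequency from the two averaged points

    # White-noise contribution to the frequency error; falls off as 1/sqrt(m).
    f_unc = np.sqrt(2.0) * sigma / (np.sqrt(m) * gap)
    print(f"estimated frequency {f_est:.3e}, white-noise uncertainty {f_unc:.3e}")

    In his scheme the burst size would be grown until that white-noise term
    drops to the flicker floor or to the predetermined error level.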






  13. Re: NTP phase lock loop inputs and outputs?

    The only thing that you can conclude from the referenced paper is that
    for very short measurement intervals, on the order of a second, the noise
    process is white phase. Obviously a simple average is optimal to
    determine the time from a group of such closely spaced measurements.

    The paper then describes an algorithm to adapt to the decidedly
    non-stationary measurement noise and local oscillator noise processes as
    the averaging time is increased. Flicker and random walk of frequency
    are mentioned, as well as white frequency. However, he attributes this
    to the crystal oscillator, which I believe is a mistake. Crystal
    oscillators do not exhibit white frequency noise over longer averaging
    times. If they did, we probably would not need Rubidium oscillators.

    All of these multiple noise processes operating simultaneously and
    completely uncorrelated with each other are what make the precision time
    transfer and frequency control art so interesting and allow so many
    different ideas for how to best set the clock for the particular
    application.

    Bruce



  14. Re: NTP phase lock loop inputs and outputs?

    Bruce writes:

    >The only thing that you can conclude from the referenced paper is that
    >for very short measurement intervals, order of a second, the noise
    >process is white phase. Obviously a simple average is optimal to
    >determine the time from a group of such closely spaced measurements.


    It is not the time that is of interest in this discussion, it is the
    frequency estimate. The frequency is estimated from the time measurements.
    Mills claimed that there was no advantage to doing a least squares fit to
    the sequence of time measurements, and that using only the end points was
    always as good as a fit. I disputed that, and stated that IF the noise in
    the time measurements is independent Gaussian noise, fitting is always
    better. If the noise is, say, 1/f noise (i.e. intervals longer than the
    Allan minimum), then I agree there is no advantage, but then using the
    endpoints is also bad: although you get a longer lever arm, you also pick
    up more noise.


    >The paper then describes an algorithm to adapt to the decidedly
    >non-stationary measurement noise and local oscillator noise processes as
    >the averaging time is increased. Flicker and random walk of frequency
    >are mentioned, as well as white frequency. However he attributes this
    >to the crystal oscillator, which I believe is a mistake. Crystal
    >oscillators do not exhibit white frequency noise over longer averaging
    >times. If they did, we probably would not need Rubidium oscillators.


    They do, but the flicker and random walk dominate if the time is long
    enough.


    >All of these multiple noise processes operating simultaneously and
    >completely uncorrelated with each other are what make the precision time
    >transfer and frequency control art so interesting and allow so many
    >different ideas for how to best set the clock for the particular
    >application.


    What is interesting about his comments is that making a whole sequence of
    very rapid time measurements followed by a long period of quiet is better
    than spacing the measurements equally. However, this would almost certainly
    trigger the Kiss of Death and get the servers very upset with you. (E.g. he
    would claim that making 10 measurements within 2 seconds and then waiting a
    day is better than making one every couple of hours.) But if a server
    running ntp were hit with 10 rapid-fire measurements from one machine,
    would it complain?
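
    Whether it complains depends on how the server's rate limiting is set up;
    one way to find out is to try it against a server you control. A minimal
    SNTP probe sketch in Python (the server name and burst size are
    assumptions; a Kiss-of-Death reply shows up as stratum 0 with the kiss
    code, e.g. "RATE", in the reference-ID field):

    # Fire a short burst of SNTP client requests and look for Kiss-of-Death
    # replies.  Point this only at a server you operate yourself.
    import socket
    import time

    SERVER = ("ntp.example.org", 123)   # hypothetical server under your control
    BURST = 10

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    request = b"\x1b" + 47 * b"\x00"    # LI=0, VN=3, Mode=3 (client), rest zero

    for i in range(BURST):
        sock.sendto(request, SERVER)
        try:
            reply, _ = sock.recvfrom(512)
        except socket.timeout:
            print(i, "no reply (dropped?)")
            continue
        stratum = reply[1]
        refid = reply[12:16]
        if stratum == 0:
            print(i, "kiss-of-death, code", refid.decode("ascii", "replace"))
        else:
            print(i, "normal reply, stratum", stratum)
        time.sleep(0.2)                 # roughly 10 queries within 2 seconds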



    >Bruce


    >Unruh wrote:
    >> Bruce writes:
    >>
    >>> My understanding is that least squares is optimal only when the
    >>> residuals are white. For measurements of atomic frequency standards

    >>
    >> Yes, and I clearly made that assumption. And the assumption is generally
    >> true for short enough time intervals. If the netwrok delays are really
    >> really short or if the time over which you are determining the frequency is
    >> very long, then that is a bad assumption.
    >>
    >>
    >>> such as Rubidium or Cesium, the noise process is dominated by white
    >>> frequency noise, and in this case linear regression yields the optimal
    >>> estimate of frequency. A little snooping around on the NIST web site
    >>> will provide the relevant backup info.

    >>
    >>> For so many other precision timing and frequency applications, the noise
    >>> processes are decidedly un-white. David Allan developed his famous
    >>> two-sample variance to handle these divergent, non-stationary noise
    >>> processes. For instance, quartz oscillators are dominated by flicker
    >>> frequency noise for averaging times greater than about 100 milliseconds,
    >>> and eventually turn to random walk at longer averaging times. Selective
    >>> Availability of GPS (when it was in effect) was a white phase noise
    >>> process that modulated the time transfer for un-keyed users. The
    >>> statistics of network time transfer via ntp are undoubtedly divergent,
    >>> but I have not seen any data that showed it to be white frequency noise
    >>> dominant.

    >>
    >> All the data suggests that it is white and the explicit assumption of that
    >> Levine paper is that it is white.
    >>
    >>
    >>
    >>> So, it is not clear that linear regression is optimal for estimating the
    >>> frequency via ntp, unless someone has determined the statistics to be
    >>> white frequency. I personally have not performed the measurements to
    >>> make that determination, but it would not surprise me if Judah Levine has.

    >>
    >> And he explicitly assumes that the network delay noise is white. His whole
    >> procedure is to make a large ( eg 25-50) of ntp type measurements at one
    >> time (withing seconds) then wait a long time ( 1/4 of a day) and doing it
    >> again. He estimates teh frequency by averaging the measurements at any one
    >> time, and then using that average phase error to determine the frequency.
    >>
    >> The number of measurements at one instant is determined by requiring that
    >> the frequency error due to the white noise ( decreases as sqrt(n)) is equal
    >> to the other errors (flicker noise, etc) or equals the pre determined error
    >> level wanted.


  15. Re: NTP phase lock loop inputs and outputs?

    Bill,

    No. You are averaging an average. You are trying to bait me with further
    discussion. I am done.

    Dave

    Unruh wrote:

    > "David L. Mills" writes:
    >
    >
    >>Bill,

    >
    >
    >>Read it again. Judah takes multiple samples to reduce the phase noise,
    >>not to improve the frequency estimation.

    >
    >
    > Dave: The frequency estimate is done by subtracting two phase
    > determinations. Thus the phase noise enters the frequency determination. By
    > reducing the phase noise you reduce the frequency noise as well. I think
    > you need to read it again, but each of us just telling the other to read
    > properly will not help.
    >
    > The frequency estimate is obtained in NTP and in his procedure by making
    > phase measurements:
    > f_i = (y_i - y_{i-1})/T
    > If y_i = z_i + e_i, where z_i is the "true" time and e_i is a Gaussian random
    > variable, then delta f_i = sqrt(<e_i^2> + <e_{i-1}^2>)/T.
    > By reducing <e^2> you reduce delta f_i. And as you point out, you can
    > reduce <e^2> by making a bunch of measurements. Those measurements can be
    > all done at the end points or spread over the time interval T. The latter
    > is not quite as effective in reducing delta f_i, since many of the
    > measurements do not have as long a "lever arm" as if they were all at the
    > endpoints, and that is why uniform sampling is about sqrt(3) worse than
    > clustering at the end points. But in either case, the more measurements you
    > make, the more you reduce the uncertainty in the frequency estimate.
    >
    > Anyway, at this point everyone else has enough information to make up their
    > own mind.
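
    A quick Monte Carlo sketch of the claim quoted above (noise level, interval
    and sample count are hypothetical; independent white phase noise is assumed),
    comparing clustering at the end points with a least-squares fit to uniformly
    spaced samples. It should reproduce the roughly sqrt(3) difference mentioned:

        import numpy as np

        rng = np.random.default_rng(0)
        n, T, sigma = 50, 20000.0, 1e-3      # samples, interval (s), phase noise (s)
        true_freq, trials = 1e-6, 2000       # true rate error (s/s), Monte Carlo runs

        def endpoint_estimate():
            # n/2 phase samples clustered at t = 0 and n/2 at t = T
            start = rng.normal(0.0, sigma, n // 2)
            end = true_freq * T + rng.normal(0.0, sigma, n // 2)
            return (end.mean() - start.mean()) / T

        def uniform_ls_estimate():
            # n phase samples spread uniformly over [0, T], least-squares slope
            t = np.linspace(0.0, T, n)
            y = true_freq * t + rng.normal(0.0, sigma, n)
            slope, _ = np.polyfit(t, y, 1)
            return slope

        ep = np.array([endpoint_estimate() for _ in range(trials)])
        ls = np.array([uniform_ls_estimate() for _ in range(trials)])
        print("endpoint clusters: %.3e  (theory sqrt(4/n)*sigma/T  = %.3e)"
              % (ep.std(), np.sqrt(4.0 / n) * sigma / T))
        print("uniform LS fit   : %.3e  (theory sqrt(12/n)*sigma/T = %.3e)"
              % (ls.std(), np.sqrt(12.0 / n) * sigma / T))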


  16. Re: NTP phase lock loop inputs and outputs?

    Bruce,

    Straight arrow. That's why I have been hollering about the Allan
    intercept. The characteristics of the heavy-tail distribution in actual
    NTP measurements are in my book. The bottom line is that in most
    real-world NTP configurations for WANs the real enemy is not phase
    noise, but frequency noise with H parameters considerably greater than 0.5.

    Dave

    Bruce wrote:

    > My understanding is that least squares is optimal only when the
    > residuals are white. For measurements of atomic frequency standards
    > such as Rubidium or Cesium, the noise process is dominated by white
    > frequency noise, and in this case linear regression yields the optimal
    > estimate of frequency. A little snooping around on the NIST web site
    > will provide the relevant backup info.
    >
    > For so many other precision timing and frequency applications, the noise
    > processes are decidedly un-white. David Allan developed his famous
    > two-sample variance to handle these divergent, non-stationary noise
    > processes. For instance, quartz oscillators are dominated by flicker
    > frequency noise for averaging times greater than about 100 milliseconds,
    > and eventually turn to random walk at longer averaging times. Selective
    > Availability of GPS (when it was in effect) was a white phase noise
    > process that modulated the time transfer for un-keyed users. The
    > statistics of network time transfer via ntp are undoubtedly divergent,
    > but I have not seen any data that showed it to be white frequency noise
    > dominant.
    >
    > So, it is not clear that linear regression is optimal for estimating the
    > frequency via ntp, unless someone has determined the statistics to be
    > white frequency. I personally have not performed the measurements to
    > make that determination, but it would not surprise me if Judah Levine has.
    >
    > Bruce
    >
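
    For reference, a minimal sketch (with made-up phase data) of one common
    overlapping estimator of the two-sample (Allan) deviation mentioned above;
    with pure white phase noise it falls off roughly as 1/tau:

        import numpy as np

        def allan_deviation(x, tau0, m):
            """Overlapping Allan deviation of phase data x (seconds), sampled
            every tau0 seconds, evaluated at averaging time tau = m * tau0."""
            x = np.asarray(x, dtype=float)
            d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second differences
            avar = np.mean(d ** 2) / (2.0 * (m * tau0) ** 2)
            return np.sqrt(avar)

        # Hypothetical example: pure white phase noise, 1 ms rms, one sample
        # every 16 s.  The printed ADEV should drop roughly as 1/tau.
        rng = np.random.default_rng(1)
        tau0 = 16.0
        x = rng.normal(0.0, 1e-3, 4096)
        for m in (1, 4, 16, 64):
            print("tau = %6.0f s   ADEV = %.3e" % (m * tau0, allan_deviation(x, tau0, m)))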


  17. Re: NTP phase lock loop inputs and outputs?

    Bruce,

    Judah and I have gone around on the issue of white/flicker frequency noise
    with cold rocks (grotty computer oscillators). Determining accurate
    Allan deviation plots is something of a black art and at hazard to
    insidious resonances. I found with careful technique, as documented in
    previous papers and my book, that the phase noise and frequency noise
    characteristics of cold rocks are generally separable, mainly because
    the flicker noise is so bad. Therefore, the intersection of the phase
    characteristic and the frequency characteristic is generally well defined,
    which I call the Allan intercept. This is why the NTP discipline is
    mainly responsive to phase noise below the intercept and to frequency noise
    above it.
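
    In the simplest picture, a sketch of that intercept with made-up noise levels:
    the white-phase-noise characteristic falls off as 1/tau, the flicker-frequency
    floor is flat, and the crossing of the two is the Allan intercept:

        # Hypothetical characterization of a "cold rock" and its measurement channel.
        w = 5e-4        # white phase noise: contributes ~ w / tau to the Allan deviation
        flicker = 2e-7  # flicker frequency floor: roughly constant Allan deviation

        # The two characteristics cross where w / tau == flicker:
        tau_x = w / flicker
        print("Allan intercept ~ %.0f s" % tau_x)
        print("phase noise dominates below it, frequency noise above it")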

    Dave

    Bruce wrote:

    > The only thing that you can conclude from the referenced paper is that
    > for very short measurement intervals, order of a second, the noise
    > process is white phase. Obviously a simple average is optimal to
    > determine the time from a group of such closely spaced measurements.
    >
    > The paper then describes an algorithm to adapt to the decidedly
    > non-stationary measurement noise and local oscillator noise processes as
    > the averaging time is increased. Flicker and random walk of frequency
    > are mentioned, as well as white frequency. However he attributes this
    > to the crystal oscillator, which I believe is a mistake. Crystal
    > oscillators do not exhibit white frequency noise over longer averaging
    > times. If they did, we probably would not need Rubidium oscillators.
    >
    > All of these multiple noise processes operating simultaneously and
    > completely uncorrelated with each other are what make the precision time
    > transfer and frequency control art so interesting and allow so many
    > different ideas for how to best set the clock for the particular
    > application.
    >
    > Bruce
    >
    > Unruh wrote:
    >
    >> Bruce writes:
    >>
    >>> My understanding is that least squares is optimal only when the
    >>> residuals are white. For measurements of atomic frequency standards

    >>
    >>
    >> Yes, and I clearly made that assumption. And the assumption is generally
    >> true for short enough time intervals. If the network delays are really,
    >> really short, or if the time over which you are determining the frequency is
    >> very long, then that is a bad assumption.
    >>
    >>
    >>> such as Rubidium or Cesium, the noise process is dominated by white
    >>> frequency noise, and in this case linear regression yields the
    >>> optimal estimate of frequency. A little snooping around on the NIST
    >>> web site will provide the relevant backup info.

    >>
    >>
    >>> For so many other precision timing and frequency applications, the
    >>> noise processes are decidedly un-white. David Allan developed his
    >>> famous two-sample variance to handle these divergent, non-stationary
    >>> noise processes. For instance, quartz oscillators are dominated by
    >>> flicker frequency noise for averaging times greater than about 100
    >>> milliseconds, and eventually turn to random walk at longer averaging
    >>> times. Selective Availability of GPS (when it was in effect) was a
    >>> white phase noise process that modulated the time transfer for
    >>> un-keyed users. The statistics of network time transfer via ntp are
    >>> undoubtedly divergent, but I have not seen any data that showed it to
    >>> be white frequency noise dominant.

    >>
    >>
    >> All the data suggests that it is white and the explicit assumption of
    >> that Levine paper is that it is white.
    >>
    >>
    >>> So, it is not clear that linear regression is optimal for estimating
    >>> the frequency via ntp, unless someone has determined the statistics
    >>> to be white frequency. I personally have not performed the
    >>> measurements to make that determination, but it would not surprise me
    >>> if Judah Levine has.

    >>
    >>
    >> And he explicitly assumes that the network delay noise is white. His whole
    >> procedure is to make a large number (e.g. 25-50) of NTP-type measurements at
    >> one time (within seconds), then wait a long time (1/4 of a day) and do it
    >> again. He estimates the frequency by averaging the measurements at any one
    >> time, and then using that average phase error to determine the frequency.
    >>
    >> The number of measurements at one instant is determined by requiring that
    >> the frequency error due to the white noise (it decreases as sqrt(n)) is equal
    >> to the other errors (flicker noise, etc.) or equals the predetermined error
    >> level wanted.
    >>
    >>
    >>


  18. Re: NTP phase lock loop inputs and outputs?

    "David L. Mills" writes:

    >Bill,


    >No. You are averaging an average. You are trying to bait me with further
    >discussion. I am done.


    What? What average of an average am I doing?
    Given N observations of the phase, what is the best frequency estimate? You
    claimed "use only the two measurements at the end points." I claim "use the
    least-squares fit for the frequency." You claimed Levine supported you. I
    claim Levine does exactly what I advocate, but he also advocates that all the
    N measurements be taken at the maximum time separation (which I agree gives a
    better frequency estimate). He does NOT advocate simply taking the two
    furthest measurements.

    And what am I trying to bait you with? You certainly have the right to stop
    the discussion. That does not change the fact that your original claim was
    either wrong, or was phrased in such a way as to lead me to believe you
    were claiming something you were not. But the latter interpretation falls,
    since you have never corrected my interpretation of what you said.

    To repeat: the best frequency estimate, given N measurements of the time
    over a time period T, is the least-squares estimate of the slope,
    assuming that the noise is modeled by independent noise sources (i.e. not 1/f
    noise).
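
    As a concrete sketch of the two estimators being argued about (the data are
    hypothetical, and independent white phase noise is assumed):

        import numpy as np

        def ls_frequency(t, x):
            """Least-squares slope of phase samples x (s) taken at times t (s):
            the estimate advocated above for independent, white phase noise."""
            slope, _intercept = np.polyfit(t, x, 1)
            return slope

        def endpoint_frequency(t, x):
            """The two-endpoint estimate, for comparison."""
            return (x[-1] - x[0]) / (t[-1] - t[0])

        # Hypothetical data: 64 phase samples over ~18 hours with 1 ms white noise
        # and a true rate error of 12 ppm.
        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 64 * 1024.0, 64)
        x = 12e-6 * t + rng.normal(0.0, 1e-3, t.size)

        print("least-squares slope : %.4e" % ls_frequency(t, x))
        print("endpoints only      : %.4e" % endpoint_frequency(t, x))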

    >Dave




  19. Re: NTP phase lock loop inputs and outputs?

    "David L. Mills" writes:

    >Bruce,


    >Straight arrow. That's why I have been hollering about the Allan
    >intercept. The characteristics of the heavy-tail distribution in actual
    >NTP measurements are in my book. The bottom line is that in most
    >real-world NTP configurations for WANs the real enemy is not phase
    >noise, but frequency noise with H parameters considerably greater than 0.5.


    No. You have NOT been hollering about the Allan intercept. If the time is
    much larger than the Allan intercept, then I agree that least squares is
    not the best. If it is smaller, then it is.
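
    To make that concrete: the intercept can be read off an Allan deviation
    plot of the offset data themselves. Here is a minimal Python sketch (not
    ntpd or chrony code; the sample spacing, noise levels and sample count are
    made-up illustrative numbers):

        import numpy as np

        def allan_deviation(x, tau0, m):
            """Overlapping Allan deviation of phase data x at tau = m*tau0."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            if n < 2 * m + 1:
                raise ValueError("not enough samples for this averaging time")
            # second differences of the phase at stride m
            d2 = x[2 * m:] - 2.0 * x[m:n - m] + x[:n - 2 * m]
            return np.sqrt(np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2))

        # Toy data: white phase noise plus a random walk in frequency.
        rng = np.random.default_rng(0)
        tau0, n = 16.0, 4096                       # seconds between samples
        white_pm = 1e-4 * rng.standard_normal(n)   # ~100 us measurement noise
        rw_fm = np.cumsum(np.cumsum(1e-9 * rng.standard_normal(n))) * tau0
        x = white_pm + rw_fm
        for m in (1, 4, 16, 64, 256):
            print(f"tau = {m * tau0:7.0f} s   adev = {allan_deviation(x, tau0, m):.2e}")

    On a log-log plot the white-phase-noise part falls off as 1/tau and the
    frequency-noise part flattens out or rises again; where they cross is the
    intercept being argued about here, and it moves with the measurement noise.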



    >Dave


    >Bruce wrote:


    >> My understanding is that least squares is optimal only when the
    >> residuals are white. For measurements of atomic frequency standards
    >> such as Rubidium or Cesium, the noise process is dominated by white
    >> frequency noise, and in this case linear regression yields the optimal
    >> estimate of frequency. A little snooping around on the NIST web site
    >> will provide the relevant backup info.
    >>
    >> For so many other precision timing and frequency applications, the noise
    >> processes are decidedly un-white. David Allan developed his famous
    >> two-sample variance to handle these divergent, non-stationary noise
    >> processes. For instance, quartz oscillators are dominated by flicker
    >> frequency noise for averaging times greater than about 100 milliseconds,
    >> and eventually turn to random walk at longer averaging times. Selective
    >> Availability of GPS (when it was in effect) was a white phase noise
    >> process that modulated the time transfer for un-keyed users. The
    >> statistics of network time transfer via ntp are undoubtedly divergent,
    >> but I have not seen any data that showed it to be white frequency noise
    >> dominant.
    >>
    >> So, it is not clear that linear regression is optimal for estimating the
    >> frequency via ntp, unless someone has determined the statistics to be
    >> white frequency. I personally have not performed the measurements to
    >> make that determination, but it would not surprise me if Judah Levine has.
    >>
    >> Bruce
    >>
    >> Unruh wrote:
    >>
    >>> "David L. Mills" writes:
    >>>
    >>>> Bill,
    >>>
    >>>
    >>>> Read it again. Judah takes multiple samples to reduce the phase
    >>>> noise, not to improve the frequency estimation.
    >>>
    >>>
    >>> Dave: The frequency estimate is done by subtracting two phase
    >>> determinations, so the phase noise enters the frequency determination.
    >>> By reducing the phase noise you reduce the frequency noise as well. I
    >>> think you need to read it again, but each of us just telling the other
    >>> to read it properly will not help.
    >>> The frequency estimate is obtained in NTP and in his procedure by making
    >>> phase measurements: f_i = (y_i - y_{i-1})/T.
    >>> If y_i = z_i + e_i, where z_i is the "true" time and e_i is a Gaussian
    >>> random variable, then delta f_i = sqrt(<e_i^2> + <e_{i-1}^2>)/T.
    >>> By reducing <e^2> you reduce delta f_i. And as you point out, you can
    >>> reduce <e^2> by making a bunch of measurements. Those measurements can
    >>> be all done at the end points or spread over the time interval T. The
    >>> latter is not quite as effective in reducing delta f_i, since many of
    >>> the measurements do not have as long a "lever arm" as they would if they
    >>> were all at the endpoints; that is why uniform sampling is about sqrt(3)
    >>> worse than clustering at the end points. But in either case, the more
    >>> measurements you make, the more you reduce the uncertainty in the
    >>> frequency estimate.
    >>> Anyway, at this point everyone else has enough information to make up
    >>> their own mind.
    >>>
    >>>
    >>>> Dave
    >>>
    >>>
    >>>> Unruh wrote:
    >>>
    >>>
    >>>>> You must have read a different paper than that one. I found it
    >>>>> (through our library) and it says that if you have n measurements in
    >>>>> a time period T, the best strategy is to take n/2 measurements at the
    >>>>> beginning of the interval and n/2 at the end, to minimize the effect
    >>>>> of the white phase noise on the frequency estimate. That is perfectly
    >>>>> true, and gives an error which goes as sqrt(4/n)*delta/T rather than
    >>>>> sqrt(12/n)*delta/T for equally spaced measurements (assuming large n),
    >>>>> where T is the total time interval and delta is the std dev of each
    >>>>> phase measurement. But it certainly does NOT say that if you have n
    >>>>> measurements, you should just use the first and last one to estimate
    >>>>> the slope. If you have n measurements, the best estimate of the slope
    >>>>> is a least squares fit. If they are equally spaced, the center third
    >>>>> do not help much (nor do they hinder), but a least squares fit is
    >>>>> always the best thing to do.
    >>>>>
    >>>>> "David L. Mills" writes:
    >>>>>
    >>>>>
    >>>>>> Bill,
    >>>>>
    >>>>>
    >>>>>> NIST doesn't agree with you. Only the first and last are truly
    >>>>>> significant. Reference: Levine, J. Time synchronization over the
    >>>>>> Internet using an adaptive frequency locked loop. IEEE Trans. UFFC,
    >>>>>> 46(4), 888-896, 1999.
    >>>>>
    >>>>>
    >>>>>> Dave
    >>>>>
    >>>>>
    >>>>>> Unruh wrote:
    >>>>>
    >>>>>
    >>>>>>> "David L. Mills" writes:
    >>>>>>>
    >>>>>>>
    >>>>>>>
    >>>>>>>> Bill,
    >>>>>>>
    >>>>>>>
    >>>>>>>> Ahem. The first point I made was that least-squares doesn't help
    >>>>>>>> the frequency estimate. The next point you made is that
    >>>>>>>> least-squares improves the phase estimate. The last point you
    >>>>>>>> made is that phase noise
    >>>>>>>
    >>>>>>>
    >>>>>>> No. The point I tried to make was that the least squares improved
    >>>>>>> the FREQUENCY estimate by sqrt(n/6) for large n, where n is the
    >>>>>>> number of points (assumed equally spaced) at which the phase is
    >>>>>>> measured. I am sorry that the way I phrased it could have been
    >>>>>>> misunderstood.
    >>>>>>>
    >>>>>>>
    >>>>>>> The phase is ALSO improved proportional to sqrt(n)
    >>>>>>> . This assumes uncorrelated phase errors dominate the error budget.
    >>>>>>>
    >>>>>>>
    >>>>>>>
    >>>>>>>
    >>>>>>>> is not important. Our points have been made and further
    >>>>>>>> discussion would be boring.
    >>>>>>>
    >>>>>>>
    >>>>>>> Except you misunderstood my point. It may still be boring to you.
    >>>>>>>
    >>>>>>>
    >>>>>>>
    >>>>>>>> Dave
    >>>>>>>
    >>>>>>>
    >>>>>>>> Unruh wrote:
    >>>>>>>>
    >>>>>>>>
    >>>>>>>>> "David L. Mills" writes:
    >>>>>>>>>
    >>>>>>>>>
    >>>>>>>>>
    >>>>>>>>>
    >>>>>>>>>> Bill,
    >>>>>>>>>
    >>>>>>>>>
    >>>>>>>>>> If you need only the frequency, least-squares doesn't help a
    >>>>>>>>>> lot; all you need are the first and last points during the
    >>>>>>>>>> measurement interval.
    >>>>>>>>>
    >>>>>>>>>
    >>>>>>>>> Well, no. If you have random phase noise, a least squares fit
    >>>>>>>>> will improve the above estimate by roughly sqrt(n/4), where n is
    >>>>>>>>> the number of points. That can be significant. It is certainly
    >>>>>>>>> true that the end points have the most weight (which is why the
    >>>>>>>>> factor of 1/4). I.e., if you have 64 points, you are better by
    >>>>>>>>> about a factor of 4, which is not insignificant.
    >>>>>>>>>
    >>>>>>>>>
    >>>>>>>>>
    >>>>>>>>>> The NIST LOCKCLOCK and nptd FLL disciplines compute the
    >>>>>>>>>> frequency directly and exponentially average successive
    >>>>>>>>>> intervals. The NTP discipline is in fact a hybrid PLL/FLL where
    >>>>>>>>>> the PLL dominates below the Allan intercept and FLL above it
    >>>>>>>>>> and also when started without a frequency file. The trick is to
    >>>>>>>>>> separate the phase component from the frequency component,
    >>>>>>>>>> which requires some delicate computations. This allows the
    >>>>>>>>>> frequency to be accurately computed as above, yet allows a
    >>>>>>>>>> phase correction during the measurement interval.
    >>>>>>>>>
    >>>>>>>>>
    >>>>>>>>> He of course is not interested in phase corrections.
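
    The sqrt(4/n), sqrt(12/n) and sqrt(3) figures in the exchange quoted above
    are easy to check numerically. Below is a small Python Monte Carlo sketch
    under the same assumption of independent Gaussian phase errors of standard
    deviation delta on each sample; n, T, delta, f_true and the trial count
    are arbitrary illustrative values:

        import numpy as np

        rng = np.random.default_rng(1)
        n, T, delta, f_true, trials = 64, 1024.0, 1e-3, 5e-6, 5000

        t_uni = np.linspace(0.0, T, n)          # n equally spaced sample times
        results = np.zeros((trials, 3))
        for k in range(trials):
            y_uni = f_true * t_uni + delta * rng.standard_normal(n)
            # n/2 samples clustered at each end of the interval
            y_lo = delta * rng.standard_normal(n // 2)            # true offset 0 at t = 0
            y_hi = f_true * T + delta * rng.standard_normal(n // 2)
            results[k, 0] = (y_uni[-1] - y_uni[0]) / T            # first and last point only
            results[k, 1] = (y_hi.mean() - y_lo.mean()) / T       # clustered end points
            results[k, 2] = np.polyfit(t_uni, y_uni, 1)[0]        # least squares, uniform
        measured = results.std(axis=0)
        predicted = np.array([np.sqrt(2.0), np.sqrt(4.0 / n), np.sqrt(12.0 / n)]) * delta / T
        print("measured  std of the frequency estimate:", measured)
        print("predicted (large-n approximations):     ", predicted)

    With these numbers the two-point estimate comes out roughly sqrt(n/6), or
    about 3 times, noisier than the uniform least-squares slope, and clustering
    the samples at the ends gains about another factor of sqrt(3).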


  20. Re: NTP phase lock loop inputs and outputs?

    "David L. Mills" writes:

    >Bruce,


    >Judah and I have come around the issue of white/flicker frequency noise


    What does this sentence mean? Does it mean that you have come around to the
    same viewpoint, or that you are still arguing about it?

    >with cold rocks (grotty computer oscillators). Determining accurate


    Exactly why you would call them "cold rocks", and then use the term as
    though it meant something to anyone else, I do not know.

    >Allan deviation plots is something of a black art and at hazard to
    >insidious resonances. I found with careful technique, as documented in
    >previous papers and my book, that phase noise and frequency noise
    >characteristics of cold rocks are generally separable, mainly because
    >the flicker noise is so bad. Therefore, the intersection of the phase


    I assume by flicker noise you mean 1/f noise.

    >characteristic and frequency characteristic is generally well defined,
    >which I call the Allan intercept. This is why the NTP discipline is
    >mainly responsive to phase noise below the intercept and frequency noise
    >above.


    Except that the Allan intercept depends on a whole bunch of things,
    including the noise in the measurement process (network delays and noise,
    for example). There is no single Allan intercept; it depends on the system.
    The Allan intercept can be minutes or days, even on exactly the same
    computer.

    Also, as Levine says and my measurements confirm, on the timescale of a few
    hours it is the daily temperature variations, not flicker noise or any
    other semi-Markovian process (i.e. uncorrelated noise in the frequency
    domain), that dominate. Computers tend to get used during the day, and thus
    heat up during the day and not at night. One would like an adaptive process
    which tries to measure the current Allan intercept or equivalent (i.e. is
    the frequency noise due to the white measurement noise, or is it more
    correlated?) rather than relying on some "average" intercept.

    I would note that chrony has an algorithm to try to do just that -- by
    estimating whether or not a least squares linear fit to the phase errors is
    a reasonable statistical estimate, or whether the measurements indicate
    that this model is failing. I.e., it tries to dynamically adapt to the
    actual noise type.
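
    chrony's actual test is more elaborate than this, but a toy Python sketch
    of the idea might look like the following. The function names, the
    chi-squared threshold and meas_sigma are invented for the illustration and
    are not taken from chrony:

        import numpy as np

        def linear_fit_ok(t, x, meas_sigma, max_reduced_chi2=2.0):
            """Fit offset = a + f*t and test whether the residuals are
            consistent with white measurement noise of std dev meas_sigma."""
            t = np.asarray(t, dtype=float)
            x = np.asarray(x, dtype=float)
            f, a = np.polyfit(t, x, 1)               # slope = frequency error
            resid = x - (a + f * t)
            dof = max(len(x) - 2, 1)
            reduced_chi2 = np.sum((resid / meas_sigma) ** 2) / dof
            return f, reduced_chi2 <= max_reduced_chi2

        def adapt_window(t, x, meas_sigma, min_pts=8):
            """Drop the oldest samples until the linear model is acceptable;
            return the frequency estimate and the surviving window size."""
            lo = 0
            while len(t) - lo > min_pts:
                f, ok = linear_fit_ok(t[lo:], x[lo:], meas_sigma)
                if ok:
                    return f, len(t) - lo
                lo += 1                   # model failing: discard oldest point
            f, _ = linear_fit_ok(t[lo:], x[lo:], meas_sigma)
            return f, len(t) - lo

    A real implementation would also have to estimate meas_sigma from the data
    rather than assume it, but the shrink-the-window-when-the-line-stops-fitting
    idea is the point here.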



    >Dave


    >Bruce wrote:


    >> The only thing that you can conclude from the referenced paper is that
    >> for very short measurement intervals, order of a second, the noise
    >> process is white phase. Obviously a simple average is optimal to
    >> determine the time from a group of such closely spaced measurements.
    >>
    >> The paper then describes an algorithm to adapt to the decidedly
    >> non-stationary measurement noise and local oscillator noise processes as
    >> the averaging time is increased. Flicker and random walk of frequency
    >> are mentioned, as well as white frequency. However he attributes this
    >> to the crystal oscillator, which I believe is a mistake. Crystal
    >> oscillators do not exhibit white frequency noise over longer averaging
    >> times. If they did, we probably would not need Rubidium oscillators.
    >>
    >> All of these multiple noise processes operating simultaneously and
    >> completely uncorrelated with each other are what make the precision time
    >> transfer and frequency control art so interesting and allow so many
    >> different ideas for how to best set the clock for the particular
    >> application.
    >>
    >> Bruce
    >>
    >> Unruh wrote:
    >>
    >>> Bruce writes:
    >>>
    >>>> My understanding is that least squares is optimal only when the
    >>>> residuals are white. For measurements of atomic frequency standards
    >>>
    >>>
    >>> Yes, and I clearly made that assumption. And the assumption is generally
    >>> true for short enough time intervals. If the network delays are really,
    >>> really short, or if the time over which you are determining the
    >>> frequency is very long, then that is a bad assumption.
    >>>
    >>>
    >>>> such as Rubidium or Cesium, the noise process is dominated by white
    >>>> frequency noise, and in this case linear regression yields the
    >>>> optimal estimate of frequency. A little snooping around on the NIST
    >>>> web site will provide the relevant backup info.
    >>>
    >>>
    >>>> For so many other precision timing and frequency applications, the
    >>>> noise processes are decidedly un-white. David Allan developed his
    >>>> famous two-sample variance to handle these divergent, non-stationary
    >>>> noise processes. For instance, quartz oscillators are dominated by
    >>>> flicker frequency noise for averaging times greater than about 100
    >>>> milliseconds, and eventually turn to random walk at longer averaging
    >>>> times. Selective Availability of GPS (when it was in effect) was a
    >>>> white phase noise process that modulated the time transfer for
    >>>> un-keyed users. The statistics of network time transfer via ntp are
    >>>> undoubtedly divergent, but I have not seen any data that showed it to
    >>>> be white frequency noise dominant.
    >>>
    >>>
    >>> All the data suggests that it is white and the explicit assumption of
    >>> that
    >>> Levine paper is that it is white.
    >>>
    >>>
    >>>> So, it is not clear that linear regression is optimal for estimating
    >>>> the frequency via ntp, unless someone has determined the statistics
    >>>> to be white frequency. I personally have not performed the
    >>>> measurements to make that determination, but it would not surprise me
    >>>> if Judah Levine has.
    >>>
    >>>
    >>> And he explicitly assumes that the network delay noise is white. His
    >>> whole procedure is to make a large number (e.g. 25-50) of ntp-type
    >>> measurements at one time (within seconds), then wait a long time (1/4
    >>> of a day) and do it again. He estimates the frequency by averaging the
    >>> measurements at any one time, and then using that average phase error
    >>> to determine the frequency.
    >>>
    >>> The number of measurements at one instant is determined by requiring
    >>> that the frequency error due to the white noise (which decreases as
    >>> 1/sqrt(n)) is equal to the other errors (flicker noise, etc.) or equals
    >>> the predetermined error level wanted.
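
    As a rough Python sketch of that procedure (my reading of the description
    above, not NIST code), with delta, T and the target error as made-up
    illustrative numbers:

        import math
        import numpy as np

        def burst_size(delta, T, target_freq_err):
            """Samples per burst so that sqrt(2/n) * delta / T <= target_freq_err."""
            return max(1, math.ceil(2.0 * (delta / (T * target_freq_err)) ** 2))

        def freq_from_bursts(offsets_start, offsets_end, T):
            """Frequency error from two bursts of offset measurements T seconds apart."""
            return (np.mean(offsets_end) - np.mean(offsets_start)) / T

        # Illustrative numbers only: ~1 ms of white measurement noise, bursts a
        # quarter of a day apart, aiming for a 1e-8 (0.01 ppm) frequency error.
        delta, T = 1e-3, 0.25 * 86400.0
        print(burst_size(delta, T, 1e-8))   # ~43 samples per burst with these numbers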

